Three Worlds Decide (5/8)
(Part 5 of 8 in “Three Worlds Collide”)
Akon strode into the main Conference Room; and though he walked like a physically exhausted man, at least his face was determined. Behind him, the shadowy Confessor followed.
The Command Conference looked up at him, and exchanged glances.
“You look better,” the Ship’s Master of Fandom ventured.
Akon put a hand on the back of his seat, and paused. Someone was absent. “The Ship’s Engineer?”
The Lord Programmer frowned. “He said he had an experiment to run, my lord. He refused to clarify further, but I suppose it must have something to do with the Babyeaters’ data—”
“You’re joking,” Akon said. “Our Ship’s Engineer is off Nobel-hunting? Now? With the fate of the human species at stake?”
The Lord Programmer shrugged. “He seemed to think it was important, my lord.”
Akon sighed. He pulled his chair back and half-slid, half-fell into it. “I don’t suppose that the ship’s markets have settled down?”
The Lord Pilot grinned sardonically. “Read for yourself.”
Akon twitched, calling up a screen. “Ah, I see. The ship’s Interpreter of the Market’s Will reports, and I quote, ‘Every single one of the underlying assets in my market is going up and down like a fucking yo-yo while the ship’s hedgers try to adjust to a Black Swan that’s going to wipe out ninety-eight percent of their planetside risk capital. Even the spot prices on this ship are going crazy; either we’ve got bubble traders coming out of the woodwork, or someone seriously believes that sex is overvalued relative to orange juice. One derivatives trader says she’s working on a contract that will have a clearly defined value in the event that aliens wipe out the entire human species, but she says it’s going to take a few hours and I say she’s on crack. Indeed I believe an actual majority of the people still trying to trade in this environment are higher than the heliopause. Bid-ask spreads are so wide you could kick a fucking football stadium through them, nothing is clearing, and I have unisolated conditional dependencies coming out of my ass. I have no fucking clue what the market believes. Someone get me a drink.’ Unquote.” Akon looked at the Master of Fandom. “Any suggestions get reddited up from the rest of the crew?”
The Master cleared his throat. “My lord, we took the liberty of filtering out everything that was physically impossible, based on pure wishful thinking, or displayed a clear misunderstanding of naturalistic metaethics. I can show you the raw list, if you’d like.”
“And what’s left?” Akon said. “Oh, never mind, I get it.”
“Well, not quite,” said the Master. “To summarize the best ideas—” He gestured a small holo into existence.
Ask the Superhappies if their biotechnology is capable of in vivo cognitive alterations of Babyeater children to ensure that they don’t grow up wanting to eat their own children. Sterilize the current adults. If Babyeater adults cannot be sterilized and will not surrender, imprison them. If that’s too expensive, kill most of them, but leave enough in prison to preserve their culture for the children. Offer the Superhappies an alliance to invade the Babyeaters, in which we provide the capital and labor and they provide the technology.
“Not too bad,” Akon said. His voice grew somewhat dry. “But it doesn’t seem to address the question of what the Superhappies are supposed to do with us. The analogous treatment—”
“Yes, my lord,” the Master said. “That was extensively pointed out in the comments, my lord. And the other problem is that the Superhappies don’t really need our labor or our capital.” The Master looked in the direction of the Lord Programmer, the Xenopsychologist, and the Lady Sensory.
The Lord Programmer said, “My lord, I believe the Superhappies think much faster than we do. If their cognitive systems are really based on something more like DNA than like neurons, that shouldn’t be surprising. In fact, it’s surprising that the speedup is as little as -” The Lord Programmer stopped, and swallowed. “My lord. The Superhappies responded to most of our transmissions extremely quickly. There was, however, a finite delay. And that delay was roughly proportional to the length of the response, plus an additive constant. Going by the proportion, my lord, I believe they think between fifteen and thirty times as fast as we do, to the extent such a comparison can be made. If I try to use Moore’s Law type reasoning on some of the observable technological parameters in their ship—Alderson flux, power density, that sort of thing—then I get a reasonably convergent estimate that the aliens are two hundred years ahead of us in human-equivalent subjective time. Which means it would be twelve hundred equivalent years since their Scientific Revolution.”
“If,” the Xenopsychologist said, “their history went as slowly as ours. It probably didn’t.” The Xenopsychologist took a breath. “My lord, my suspicion is that the aliens are literally able to run their entire ship using only three kiritsugu as sole crew. My lord, this may represent, not only the superior programming ability that translated their communications to us, but also the highly probable case that Superhappies can trade knowledge and skills among themselves by having sex. Every individual of their species might contain the memory of their Einsteins and Newtons and a thousand other areas of expertise, no more conserved than DNA is conserved among humans. My lord, I suspect their version of Galileo was something like thirty objective years ago, as the stars count time, and that they’ve been in space for maybe twenty years.”
The Lady Sensory said, “Their ship has a plane of symmetry, and it’s been getting wider on the axis through that plane, as it sucks up nova dust and energy. It’s growing on a smooth exponential at 2% per hour, which means it can split every thirty-five hours in this environment.”
“I have no idea,” the Xenopsychologist said, “how fast the Superhappies can reproduce themselves—how many children they have per generation, or how fast their children sexually mature. But all things considered, I don’t think we can count on their kids taking twenty years to get through high school.”
There was silence.
When Akon could speak again, he said, “Are you all quite finished?”
“If they let us live,” the Lord Programmer said, “and if we can work out a trade agreement with them under Ricardo’s Law of Comparative Advantage, interest rates will—”
“Interest rates can fall into an open sewer and die. Any further transmissions from the Superhappy ship?”
The Lady Sensory shook her head.
“All right,” Akon said. “Open a transmission channel to them.”
There was a stir around the table. “My lord—” said the Master of Fandom. “My lord, what are you going to say?”
Akon smiled wearily. “I’m going to ask them if they have any options to offer us.”
The Lady Sensory looked at the Ship’s Confessor. The hood silently nodded: He’s still sane.
The Lady Sensory swallowed, and opened a channel. On the holo there first appeared, as a screen:
The Lady 3rd Kiritsugu
temporary co-chair of the Gameplayer
Language Translator version 9
Cultural Translator version 16
The Lady 3rd in this translation was slightly less pale, and looked a bit more concerned and sympathetic. She took in Akon’s appearance at a glance, and her eyes widened in alarm. “My lord, you’re hurting!”
“Just tired, milady,” Akon said. He cleared his throat. “Our ship’s decision-making usually relies on markets and our markets are behaving erratically. I’m sorry to inflict that on you as shared pain, and I’ll try to get this over with quickly. Anyway—”
Out of the corner of his eye, Akon saw the Ship’s Engineer re-enter the room; the Engineer looked as if he had something to say, but froze when he saw the holo.
There was no time for that now.
“Anyway,” Akon said, “we’ve worked out that the key decisions depend heavily on your level of technology. What do you think you can actually do with us or the Babyeaters?”
The Lady 3rd sighed. “I really should get your independent component before giving you ours—you should at least think of it first—but I suppose we’re out of luck on that. How about if I just tell you what we’re currently planning?”
Akon nodded. “That would be much appreciated, milady.” Some of his muscles that had been tense started to relax. Cultural Translator version 16 was a lot easier on his brain. Distantly, he wondered if some transformed avatar of himself was making skillful love to the Lady 3rd -
“All right,” the Lady 3rd said. “We consider that the obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded. The Babyeaters must compromise their values to eat their children at a stage where they are not sentient—we might accomplish this most effectively by changing the lifecycle of the children themselves. We can even give the unsentient children an instinct to flee and scream, and generate simple spoken objections, but prevent their brain from developing self-awareness until after the hunt.”
Akon straightened. That actually sounded—quite compassionate—sort of -
“Our own two species,” the Lady 3rd said, “which desire this change of the Babyeaters, will compensate them by adopting Babyeater values, making our own civilization of greater utility in their sight: we will both change to spawn additional infants, and eat most of them at almost the last stage before they become sentient.”
The Conference room was frozen. No one moved. Even their faces didn’t change expression.
Akon’s mind suddenly flashed back to those writhing, interpenetrating, visually painful blobs he had seen before.
A cultural translator could change the image, but not the reality.
“It is nonetheless probable,” continued the Lady 3rd, “that the Babyeaters will not accept this change as it stands; it will be necessary to impose these changes by force. As for you, humankind, we hope you will be more reasonable. But both your species, and the Babyeaters, must relinquish bodily pain, embarrassment, and romantic troubles. In exchange, we will change our own values in the direction of yours. We are willing to change to desire pleasure obtained in more complex ways, so long as the total amount of our pleasure does not significantly decrease. We will learn to create art you find pleasing. We will acquire a sense of humor, though we will not lie. From the perspective of humankind and the Babyeaters, our civilization will obtain much utility in your sight, which it did not previously possess. This is the compensation we offer you. We furthermore request that you accept from us the gift of untranslatable 2, which we believe will enhance, on its own terms, the value that you name ‘love’. This will also enable our kinds to have sex using mechanical aids, which we greatly desire. At the end of this procedure, all three species will satisfice each other’s values and possess great common ground, upon which we may create a civilization together.”
Akon slowly nodded. It was all quite unbelievably civilized. It might even be the categorically best general procedure when worlds collided.
The Lady 3rd brightened. “A nod—is that assent, humankind?”
“It’s acknowledgment,” Akon said. “We’ll have to think about this.”
“I understand,” the Lady 3rd said. “Please think as swiftly as you can. Babyeater children are dying in horrible agony as you think.”
“I understand,” Akon said in return, and gestured to cut the transmission.
The holo blinked out.
There was a long, terrible silence.
“No.”
The Lord Pilot said it. Cold, flat, absolute.
There was another silence.
“My lord,” the Xenopsychologist said, very softly, as though afraid the messenger would be torn apart and dismembered, “I do not think they were offering us that option.”
“Actually,” Akon said, “the Superhappies offered us more than we were going to offer the Babyeaters. We weren’t exactly thinking about how to compensate them.” It was strange, Akon noticed, his voice was very calm, maybe even deadly calm. “The Superhappies really are a very fair-minded people. You get the impression they would have proposed exactly the same solution whether or not they happened to hold the upper hand. We might have just enforced our own will on the Babyeaters and told the Superhappies to take a hike. If we’d held the upper hand. But we don’t. And that’s that, I guess.”
“No!” shouted the Lord Pilot. “That’s not—”
Akon looked at him, still with that deadly calm.
The Lord Pilot was breathing deeply, not as if quieting himself, but as if preparing for battle on some ancient savanna plain that no longer existed. “They want to turn us into something inhuman. It—it cannot—we cannot—we must not allow—”
“Either give us a better option or shut up,” the Lord Programmer said flatly. “The Superhappies are smarter than us, have a technological advantage, think faster, and probably reproduce faster. We have no hope of holding them off militarily. If our ships flee, the Superhappies will simply follow in faster ships. There’s no way to shut a starline once opened, and no way to conceal the fact that it is open—”
“Um,” the Ship’s Engineer said.
Every eye turned to him.
“Um,” the Ship’s Engineer said. “My Lord Administrator, I must report to you in private.”
The Ship’s Confessor shook his head. “You could have handled that better, Engineer.”
Akon nodded to himself. It was true. The Ship’s Engineer had already betrayed the fact that a secret existed. Under the circumstances, easy to deduce that it had come from the Babyeater data. That was eighty percent of the secret right there. And if it was relevant to starline physics, that was half of the remainder.
“Engineer,” Akon said, “since you have already revealed that a secret exists, I suggest you tell the full Command Conference. We need to stay in sync with each other. Two minds are not a committee. We’ll worry later about keeping the secret classified.”
The Ship’s Engineer hesitated. “Um, my lord, I suggest that I report to you first, before you decide—”
“There’s no time,” Akon said. He pointed to where the holo had been.
“Yes,” the Master of Fandom said, “we can always slit our own throats afterward, if the secret is that awful.” The Master of Fandom gave a small laugh -
- then stopped, at the look on the Engineer’s face.
“At your will, my lord,” the Engineer said.
He drew a deep breath. “I asked the Lord Programmer to compare any identifiable equations and constants in the Babyeater’s scientific archive, to the analogous scientific data of humanity. Most of the identified analogues were equal, of course. In some places we have more precise values, as befits our, um, superior technological level. But one anomaly did turn up: the Babyeater figure for Alderson’s Coupling Constant was ten orders of magnitude larger than our own.”
The Lord Pilot whistled. “Stars above, how did they manage to make that mistake—”
Then the Lord Pilot stopped abruptly.
“Alderson’s Coupling Constant,” Akon echoed. “That’s the… coupling between Alderson interactions and the...”
“Between Alderson interactions and the nuclear strong force,” the Lord Pilot said. He was beginning to smile, rather grimly. “It was a free parameter in the standard model, and so had to be established experimentally. But because the interaction is so incredibly… weak… they had to build an enormous Alderson generator to find the value. The size of a very small moon, just to give us that one number. Definitely not something you could check at home. That’s the story in the physics textbooks, my lords, my lady.”
The Master of Fandom frowned. “You’re saying… the physicists faked the result in order to… fund a huge project...?” He looked puzzled.
“No,” the Lord Pilot said. “Not for the love of power. Engineer, the Babyeater value should be testable using our own ship’s Alderson drive, if the coupling constant is that strong. This you have done?”
The Ship’s Engineer nodded. “The Babyeater value is correct, my lord.”
The Ship’s Engineer was pale. The Lord Pilot was clenching his jaw into a sardonic grin.
“Please explain,” Akon said. “Is the universe going to end in another billion years, or something? Because if so, the issue can wait—”
“My lord,” the Ship’s Confessor said, “suppose the laws of physics in our universe had been such that the ancient Greeks could invent the equivalent of nuclear weapons from materials just lying around. Imagine the laws of physics had permitted a way to destroy whole countries with no more difficulty than mixing gunpowder. History would have looked quite different, would it not?”
Akon nodded, puzzled. “Well, yes,” Akon said. “It would have been shorter.”
“Aren’t we lucky that physics didn’t happen to turn out that way, my lord? That in our own time, the laws of physics don’t permit cheap, irresistible superweapons?”
Akon furrowed his brow -
“But my lord,” said the Ship’s Confessor, “do we really know what we think we know? What different evidence would we see, if things were otherwise? After all—if you happened to be a physicist, and you happened to notice an easy way to wreak enormous destruction using off-the-shelf hardware—would you run out and tell everyone?”
“No,” Akon said. A sinking feeling was dawning in the pit of his stomach. “You would try to conceal the discovery, and create a cover story that discouraged anyone else from looking there.”
The Lord Pilot emitted a bark that was half laughter, and half something much darker. “It was perfect. I’m a Lord Pilot and I never suspected until now.”
“So?” Akon said. “What is it, actually?”
“Um,” the Ship’s Engineer said. “Well… basically… to skip over the technical details...”
The Ship’s Engineer drew a breath.
“Any ship with a medium-sized Alderson drive can make a star go supernova.”
Silence.
“Which might seem like bad news in general,” the Lord Pilot said, “but from our perspective, right here, right now, it’s just what we need. A mere nova wouldn’t do it. But blowing up the whole star - ” He gave that bitter bark of laughter, again. “No star, no starlines. We can make the main star of this system go supernova—not the white dwarf, the companion. And then the Superhappies won’t be able to get to us. That is, they won’t be able to get to the human starline network. We will be dead. If you care about tiny irrelevant details like that.” The Lord Pilot looked around the Conference Table. “Do you care? The correct answer is no, by the way.”
“I care,” the Lady Sensory said softly. “I care a whole lot. But...” She folded her hands atop the table and bowed her head.
There were nods from around the Table.
The Lord Pilot looked at the Ship’s Engineer. “How long will it take for you to modify the ship’s Alderson Drive—”
“It’s done,” said the Ship’s Engineer. “But… we should, um, wait until the Superhappies are gone, so they don’t detect us doing it.”
The Lord Pilot nodded. “Sounds like a plan. Well, that’s a relief. And here I thought the whole human race was doomed, instead of just us.” He looked inquiringly at Akon. “My lord?”
Akon rested his head in his hands, suddenly feeling more weary than he had ever felt in his life. From across the table, the Confessor watched him—or so it seemed; the hood was turned in his direction, at any rate.
I told you so, the Confessor did not say.
“There is a certain problem with your plan,” Akon said.
“Such as?” the Lord Pilot said.
“You’ve forgotten something,” Akon said. “Something terribly important. Something you once swore you would protect.”
Puzzled faces looked at him.
“If you say something bloody ridiculous like ‘the safety of the ship’ -” said the Lord Pilot.
The Lady Sensory gasped. “Oh, no,” she murmured. “Oh, no. The Babyeater children.”
The Lord Pilot looked like he had been punched in the stomach. The grim smiles that had begun to spread around the table were replaced with horror.
“Yes,” Akon said. He looked away from the Conference Table. He didn’t want to see the reactions. “The Superhappies wouldn’t be able to get to us. And they couldn’t get to the Babyeaters either. Neither could we. So the Babyeaters would go on eating their own children indefinitely. And the children would go on dying over days in their parents’ stomachs. Indefinitely. Is the human race worth that?”
Akon looked back at the Table, just once. The Xenopsychologist looked sick, tears were running down the Master’s face, and the Lord Pilot looked like he was being slowly torn in half. The Lord Programmer looked abstracted, the Lady Sensory was covering her face with her hands. (And the Confessor’s face still lay in shadow, beneath the silver hood.)
Akon closed his eyes. “The Superhappies will transform us into something not human,” Akon said. “No, let’s be frank. Something less than human. But not all that much less than human. We’ll still have art, and stories, and love. I’ve gone entire hours without being in pain, and on the whole, it wasn’t that bad an experience—” The words were sticking in his throat, along with a terrible fear. “Well. Anyway. If remaining whole is that important to us—we have the option. It’s just a question of whether we’re willing to pay the price. Sacrifice the Babyeater children—”
They’re a lot like human children, really.
“—to save humanity.”
Someone in the darkness was screaming, a thin choked wail that sounded like nothing Akon had ever heard or wanted to hear. Akon thought it might be the Lord Pilot, or the Master of Fandom, or maybe the Ship’s Engineer. He didn’t open his eyes to find out.
There was a chime.
“In-c-c-coming c-call from the Super Happy,” the Lady Sensory spit out the words like acid, “ship, my lord.”
Akon opened his eyes, and felt, somehow, that he was still in darkness.
“Receive,” Akon said.
The Lady 3rd Kiritsugu appeared before him. Her eyes widened once, as she took in his appearance, but she said nothing.
That’s right, my lady, I don’t look super happy.
“Humankind, we must have your answer,” she said simply.
The Lord Administrator pinched the bridge of his nose, and rubbed his eyes. Absurd, that one human being should have to answer a question like that. He wanted to foist off the decision on a committee, a majority vote of the ship, a market—something that wouldn’t demand that anyone accept full responsibility. But a ship run that way didn’t work well under ordinary circumstances, and there was no reason to think that things would change under extraordinary circumstances. He was an Administrator; he had to accept all the advice, integrate it, and decide. Experiment had shown that no organizational structure of non-Administrators could match what he was trained to do, and motivated to do; anything that worked was simply absorbed into the Administrative weighting of advice.
Sole decision. Sole responsibility if he got it wrong. Absolute power and absolute accountability, and never forget the second half, my lord, or you’ll be fired the moment you get home. Screw up indefensibly, my lord, and all your hundred and twenty years of accumulated salary in escrow, producing that lovely steady income, will vanish before you draw another breath.
Oh—and this time the whole human species will pay for it, too.
“I can’t speak for all humankind,” said the Lord Administrator. “I can decide, but others may decide differently. Do you understand?”
The Lady 3rd made a light gesture, as if it were of no consequence. “Are you an exceptional case of a human decision-maker?”
Akon tilted his head. “Not… particularly...”
“Then your decision is strongly indicative of what other human decision-makers will decide,” she said. “I find it hard to imagine that the options exactly balance in your decision mechanism, whatever your inability to admit your own preferences.”
Akon slowly nodded. “Then...”
He drew a breath.
Surely, any species that reached the stars would understand the Prisoner’s Dilemma. If you couldn’t cooperate, you’d just destroy your own stars. A very easy thing to do, as it had turned out. By that standard, humanity might be something of an impostor next to the Babyeaters and the Superhappies. Humanity had kept it a secret from itself. The other two races—just managed not to do the stupid thing. You wouldn’t meet anyone out among the stars, otherwise.
The Superhappies had done their very best to press C. Cooperated as fairly as they could.
Humanity could only do the same.
“For myself, I am inclined to accept your offer.”
He didn’t look around to see how anyone had reacted to that.
“There may be other things,” Akon added, “that humanity would like to ask of your kind, when our representatives meet. Your technology is advanced beyond ours.”
The Lady 3rd smiled. “We will, of course, be quite positively inclined toward any such requests. As I believe our first message to you said - ‘we love you and we want you to be super happy’. Your joy will be shared by us, and we will be pleasured together.”
Akon couldn’t bring himself to smile. “Is that all?”
“This Babyeater ship,” said the Lady 3rd, “the one that did not fire on you, even though they saw you first. Are you therefore allied with them?”
“What?” Akon said without thinking. “No—”
“My lord!” shouted the Ship’s Confessor -
Too late.
“My lord,” the Lady Sensory said, her voice breaking, “the Superhappy ship has fired on the Babyeater vessel and destroyed it.”
Akon stared at the Lady 3rd in horror.
“I’m sorry,” the Lady 3rd Kiritsugu said. “But our negotiations with them failed, as predicted. Our own ship owed them nothing and promised them nothing. This will make it considerably easier to sweep through their starline network when we return. Their children would be the ones to suffer from any delay. You understand, my lord?”
“Yes,” Akon said, his voice trembling. “I understand, my lady kiritsugu.” He wanted to protest, to scream out. But the war was only beginning, and this—would admittedly save -
“Will you warn them?” the Lady 3rd asked.
“No,” Akon said. It was the truth.
“Transforming the Babyeaters will take precedence over transforming your own species. We estimate the Babyeater operation may take several weeks of your time to conclude. We hope you do not mind waiting. That is all,” the Lady 3rd said.
And the holo faded.
“The Superhappy ship is moving out,” the Lady Sensory said. She was crying, silently, as she steadily performed her duty of reporting. “They’re heading back toward their starline origin.”
“All right,” Akon said. “Take us home. We need to report on the negotiations—”
There was an inarticulate scream, as if a throat were trying to burst the walls of the Conference chamber, as the Lord Pilot burst out of his chair, burst all restraints he had placed on himself, and lunged forward.
But standing behind his target, unnoticed, the Ship’s Confessor had produced from his sleeve the tiny stunner—the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, the Confessor’s arm swept out...
… [This option will become the True Ending only if someone suggests it in the comments before the previous ending is posted tomorrow. Otherwise, the first ending is the True one.]
But standing behind his target, unnoticed, the Ship’s Confessor had produced from his sleeve the tiny stunner—the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, the Confessor’s arm swept out...
… and anaesthetised everyone in the room. He then went downstairs to the engine room, and caused the sun to go supernova, blocking access to earth.
Regardless of his own preferences, he takes the option for humanity to ‘painlessly’ defect in the interstellar Prisoner’s Dilemma, knowing a priori that the Superhappies chose to cooperate.
Hmm. The three networks are otherwise disconnected from each other? And the Babyeaters are the first target?
Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.
(Otherwise, yes, I would set off the bomb immediately.)
“[This option will become the True Ending only if someone suggests it in the comments before the previous ending is posted tomorrow. Otherwise, the first ending is the True one.]”
I’m not sure I understand what you mean. If no one chooses (2) does that mean that the (True) story ends with the Confessor stunning the Lord Pilot? …or does it continue after he’s stunned? …or have I gotten it all wrong?
Are the storylines like these:
[branching diagram of the two endings omitted]
@Anonymous Coward: Reasonable, except even by defecting you haven’t gained the substantially greater payoff that is the whole point of the Prisoner’s Dilemma. In other words, as he asks: what about the Babyeater children? I wouldn’t know just how to quantify the 2 options—I believe that’s the whole point of this series :)—but I wouldn’t call it much better than what the Superhappy aliens offered, at least with the more “inclusive” altruistic concern that the humans in this illustration are supposed to have.
Carl—I’m pretty sure either way we get three more chapters.
Given that the number of parts in the story has been explicitly stated all along, I doubt it’d change in length.
No, you’ve got to suggest someone else to stun, I’m pretty sure.
One thing I’m wondering about the Superhappies: they’re so eager to cooperate, even to the point of changing their own utility function; what would happen if they kept running into one alien race after another, all of which would alter it in the same direction?
I can’t figure out a better solution than what they’ve proposed. I wouldn’t particularly want to eat nonsentient babies—it seems so pointless, by all three pre-existing utility functions—but so is art, by the happyhappyhappys’ function.
Eliezer, if your point is to emotionally drive the point that utility functions are basically arbitrary, you’ve succeeded.
2) …and anesthetized himself.
Umm… ‘Superstimulus’.
I think Eliezer has written passionately and pointedly about rationality, the will to become stronger, and the need for FAI. Writing this story makes a separate point about those ideas.
After reading this story I find myself agreeing with Eliezer more on his views, and that seems to be a sign of manipulation, not of rationality.
Philosophy expressed in the form of fiction seems to have a very strong effect on people—even if the fiction isn’t very good (ref. Ayn Rand). I find this story well written and engaging. I’m having other people who lack the background of reading Eliezer’s earlier writings read and comment on the story, to get a better idea of whether it actually makes its point, rather than merely creating stronger attachment to ideas presented earlier.
A few comments, in no particular order (randomized):
Format of the story being released in small bite sized installments creates an artificial scarcity.
The story compactly addresses matters that readers have spent time studying here, which is very rewarding.
Engaging people in the creation of the story creates attachment to it.
Characters use very familiar phrases that help formation of in-group feeling.
No matter which of the three alien species one happens to cheer for in the story that is still cheering for someone.
Svein: No, you’ve got to suggest someone else to stun, I’m pretty sure.
I doubt Eliezer’s grand challenge to us would be to contribute less than four bits to his story.
So… (even taking MST3K into account)
Akon has certainly gone mad. He believes that he is in a unique position of power (even though his decision markets and his Command staff are divided) and that he has to make the decision NOW, with great unlikely secrets revealed essentially just to him. There are too many unlikely events for Akon to believe in. I think he has failed his exercise, or whatever it is he is living in.
Anonymous Coward’s defection isn’t. A real defection would be the Confessor anesthetizing Akon, then commandeering the ship to chase the Super Happies and nova their star.
“But our negotiations with them failed, as predicted.”
If the Lady 3rd speaks the truth, and human behaviour is not more difficult to model than Babyeater behaviour, then the crew faces a classic Newcomblike problem. (Eliezer hints through Akon’s thoughts that the Super Happies have indeed built reliable models of at least some crewmembers.)
So if you write an alternative ending, take into account that whatever the Confessor, or anyone else, does, will have been already predicted and taken into account by the Super Happy People.
Oh, you can believe he’s taken it into account. It’s probably secretly a major plot point or something. He’s Eliezer Yudkowsky.
Why should we care for some crystalline beasts? We don’t desire to modify lions to eat vegetables, and their prey is much more like us. Destroy the star immediately, or better do it at the moment when it can do the greatest damage to the damned self-righteous superhappies (revenge is, after all, also a sort of human value).
You’ll get the same next three installments regardless of whether someone comes up with the Alternative Solution before Ending 1 is posted. But only if someone suggests the Alternative Solution will Ending 2 become the True Ending—the one that, as ’twere, actually happened in that ficton.
This is based on the visual novel format where a given storyline often has two endings, the True Ending and the Good Ending, or the Normal Ending and the True Ending (depending on which of the two is sadder).
To make the second ending the True Ending, someone has to suggest the alternative thing for the Impossible to do in this situation—it’s not enough to guess who the Confessor goes after.
Well, I’m glad the story wasn’t ruined by the alternative being too obvious. If no one’s thought of it yet in the comments, then it’s at least plausible that the people on the ship didn’t think of it earlier.
Anonymous—yes, I keep wondering myself about the ethics of writing illustrative fiction. So far I’m coming out on the net positive side, especially after Robin’s post on Near versus Far thinking. But it does seem to put more of a strain on how much you trust the author—both their honesty and their intelligence.
PS: Anna and Steve, Shulman, Vassar, and Marcello, please don’t post the solution if you get it—I want to leave the field at least a little open here...
I thought these “events” might be a test for the humans, a mass hallucination. It is strange that three civilisations should encounter each other at the same time like this.
It is difficult to alter one human characteristic without changing the whole person: difficult to change from male to female. Far more difficult to improve a civilization by changing one characteristic of the humans—say, taking away the ability to feel pain. Or to take away the whole basis of moral action and cooperation, by preventing Babyeaters from eating babies. Would the Superhappies really desire to make everyone else just like them? Possibly; I think that is a morally poorer choice, but making that choice is very common among humans.
But I cannot think of a better ending.
I wonder if the Superhappys could be persuaded that instead of modifying us to not feel pain at all, we could be modified to have the ability to feel pain switched off by default, but with the potential to be activated if we so chose—that would avoid their concerns about non-consensually inflicting pain on children who hadn’t come to the philosophical realisation that it was worth it, but would still allow us to remain fully human if that was what we actually desired as individuals given the choice.
… and stuns Akon (or everyone). He then opens a channel to the Superhappies, and threatens to detonate the star—thus preventing the Superhappies from “fixing” the Babyeaters, their highest priority. He uses this to blackmail them into fixing the Babyeaters while leaving humanity untouched.
I don’t really have a good enough grasp on the world to predict what is possible; it all seems too unreal.
One possibility is to jump one star away back towards earth and then blow up that star, if that is the only link to the new star.
...and stuns Akon, for failing to be rational and jumping to a decision with insufficient information. Doesn’t it seem a little TOO convenient that the first alien race is less powerful, while the second one is massively more powerful? And now that the one is gone, and the other is dust, humans seem to have accepted being modified in ways that would make the babyeaters happy… without even bringing up any other scenarios. That’s contrary to the stated mission of the Confessor.
The Confessor finds Akon’s acceptance of part of the terms of capitulation flawed, and stuns him, effectively relieving him of command. The rest of the crew deliberate over their options.
Something about Akon’s unwillingness to warn the Babyeaters of the Superhappies’ plans set my “Plot Device” warning lights off. Might the rest of the story involve following the Babyeaters’ starline to attempt to warn/renegotiate with them, and, upon probably failing, detonating that sun to protect the Babyeaters (who didn’t choose to capitulate), consigning humanity to a marathon carnal surplus-infant-eating future?
Not that I don’t struggle to come up with a rational case for this course of action, or even to rationalise it. It’s just that humanity advocating the universal eating of babies is the sort of perverse outcome I’d expect from following alien first principles to their logical conclusions.
Is there also a Scooby Doo ending, like in Wayne’s World?
@Anonymous Coward: Reasonable, except that even by defecting you haven’t gained the substantially greater payoff that is the whole point of the Prisoner’s Dilemma. In other words, as he asks: what about the Babyeater children?
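The payoff point can be made concrete with a toy matrix (illustrative numbers, nothing from the story): in a true Prisoner’s Dilemma, “defect” strictly dominates—it pays more no matter what the other player does—which is exactly the property the proposed “defection” lacks.

```python
# Toy Prisoner's Dilemma payoffs: (my payoff, their payoff),
# keyed by (my move, their move). C = cooperate, D = defect.
# Numbers are the standard illustrative ones, not from the story.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def dominates(my_move: str, other_moves: list[str]) -> bool:
    """True if my_move gives me a strictly higher payoff than the
    alternative move, whatever the other player chooses."""
    alt = "C" if my_move == "D" else "D"
    return all(
        payoffs[(my_move, other)][0] > payoffs[(alt, other)][0]
        for other in other_moves
    )

print(dominates("D", ["C", "D"]))  # True: defection pays more either way
print(dominates("C", ["C", "D"]))  # False: cooperation never dominates
```

If blowing up the star leaves you no better off than cooperating under either of the other side’s responses, it isn’t a PD-style defection at all—just a different loss.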
I misread the story and thought the Superhappies had flown off to deal with them first. But in fact, the Superhappies are ‘returning to their home planet’ before going to deal with the Babyeaters: “This will make it considerably easier to sweep through their starline network when we return.” Oops.
In any event, if the ship’s crew is immediately anaesthetised and the sun exploded, then Earth remains ignorant of the suffering of the babyeaters, and Earth is not coerced to have its value system changed by an external superior power. The only human who feels bad about all this is the one remaining conscious human on the ship before it is fried. The babyeaters experience no net change in their position, and the superhappies have made a net loss (by discovering unhappiness in the universe and being made unable to fix it). Humanity has met a more powerful force with a very different value system that wishes to impose values on other cultures, but has achieved a draw. Humanity remains ignorant of suffering—again a draw, when the only other options are to lose in some way (either by imposing values when we feel we have no right, or by knowingly allowing suffering).
Of course the Confessor might wish to first transmit a message back to Earth that neglects to mention any babyeaters, warns of the highly dangerous ‘superhappies’, and perhaps describes them falsely as super-powerful babyeaters (à la Alderson scientists) to prevent anyone from being tempted to find them, thereby preventing any individual from sacrificing the human race’s control of its own values...
I guess it depends on whether he believes ‘right to choose your own species values’ ranks above ‘right to experience endless orgasms’. If he truly has no preference for either, he might as well consider everyone dangerously highly strung and emotional and an unsuitable sample size to make decisions for humanity. In that case, perhaps he should stun everyone in the control room and cause the ship to return to earth, if he is able to do so, to tell humanity what has happened in full detail. This at least allows the decision to be made by a larger fraction of humanity.
A final practical point. So far, the people on the ship only know what they have received in communications or what they can measure with their sensors. In fact, we can’t trust either of these things; a sufficiently advanced species can fool sensors, and any species can lie. We can observe that the superhappies are clearly more technologically advanced, from the evidence of the one ship present, and the growth rate suggests they can rapidly overpower humanity. Humanity has no idea what the superhappies will really do when they return. In fact, if they wish, they might simply turn all humans into superhappies and throw away all human values, without honouring the deal. They could torture all humans till the end of time if they wish, or turn us into babyeaters. Equally, we know there is a race that is pleased to advertise that it eats babies and wishes to encourage other races to do the same; and we know that they have one quite advanced ship that is slightly technologically inferior to ours; what else they have, we don’t really know. Perhaps the babyeaters have better crews and ships back home. Perhaps the babyeaters have advanced technology that masks the real capabilities of their ship. All we have is a single unreliable sample point of two advanced civilisations with very different value systems. What we have here is a giant knowledge gap.
The only thing we know for certain is that the superhappies are almost certainly technologically superior to humanity and can basically do whatever they want to us, unless the sun is blown up. And we know that the babyeaters have values culturally unacceptable to us; and we don’t know whether they might really have the ability to impose those values on us or not. Given this knowledge of these two dangerous forces—one of which is vastly superior, and one of which is advanced and might later turn out to be superior—if humanity can achieve a ‘zero loss outcome’ for itself by blowing up the sun, it is doing rather well in such an incredibly dangerous situation. Humanity should take advantage of the fact that the superhappies have already placed a ‘co-operate’ card on the table and allowed us to decide what to do next.
You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising.
They can’t lie. They might change their minds after, in light of new information but whatever they express must be their literal intention as they express it. Their biology does not permit otherwise.
Assuming they aren’t lying about that.
They can’t be. Their thoughts are genetic. If one Superhappy attempted to lie to another, the other would read the lie, the intent to lie, the reason to lie, and the truth all in the same breath off the same allele. They don’t have separate models of their minds to be deceived as humans do. They share parts of their actual minds. Lying would be literally unthinkable. They have no way to actually generate such a thought, because their thoughts are not abstractions but physical objects to be passed around like Mendelian marbles.
… assuming they aren’t lying about how their biology works
Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network…
…and assuming it’s literally impossible to travel FTL without a starline…
…and assuming the only starline to the nova star was the one they took…
…and assuming Huygens, described as a “colony world”, is sparsely populated, and either can be evacuated or is considered “expendable” compared to the alternatives…
...then blow up Huygens’ star. Without the Huygens-Nova starline, the Happy People won’t be able to cross into human space, but the Happy-Nova-Babyeater starline will be unaffected. The Happy People can take care of the Babyeaters, and humankind will be safe. For a while.
Still not sure I’d actually take that solution. It depends on how populated Huygens is and how confident I am the Super Happy People can’t come up with alternate transportation, and I’m also not entirely opposed to the Happy People’s proposal. But:
If I had a comm link to the Happy People, I’d also want to hear their answer to the following line of reasoning: one ordinary nova in a single galaxy just attracted three separate civilizations. That means intelligent life is likely to be pretty common across the universe, and our three somewhat-united species are likely to encounter far more of it in the years to come. If the Happy People keep adjusting their (and our) utility functions each time we meet a new intelligent species, then by the millionth species there’s not going to be a whole lot remaining of the original Super Happy way of thinking—or the human way of thinking, for that matter. If they’re so smart, what’s their plan for when that happens?
If they answer “We’re fully prepared to compromise our and your utility functions limitlessly many times for the sake of achieving harmonious moralities among all forms of life in the Universe, and we predict each time will involve a change approximately as drastic as making you eat babies,” then it will be a bad day to be a colonist on Huygens.
If they’re going to play the game of Chicken, then symbolically speaking the Confessor should perhaps stun himself to help commit the ship to sufficient insanity to go through with destroying the solar system.
An attempt to paraphrase the known facts:
You and your family and friends go for a walk. You walk into an old building with 1 entrance/exit. Your friends/family are behind you.
You notice the door has an irrevocable self-locking mechanism if it is closed.
You have a knife in your pocket.
As you walk in you see three people dressed in ‘lunatic asylum’ clothes.
Two of them are in the corner; one is a guy who is beating up a woman. He appears unarmed but may have a concealed weapon.
The guy shouts to you that ‘god is making him do it’ and suggests that you should join in and attack your family who are still outside the door.
The third person in the room has a machine gun pointed at you. He tells you that he is going to give you and your family 1,000,000 pounds each if you just step inside, and he says he is also going to stop the other inmate from being violent.
You can choose to close the door (which will lock). What will happen next inside the room will then be unknown to you.
Or you can allow your family and friends into the room with the lunatics, at least one of whom is armed with a machine gun.
Inside the room, as long as that machine gun exists, you have no control over what actually happens next in the room.
Outside the room, once the door is locked, you also have no control over what happens next in the room.
But if you invite your family inside, you are risking that they may be killed by the machine gun—or they may each be given 1 million pounds. Either way, the matter is in the hands of the machine-gun-toting lunatic.
Your family are otherwise presently happy and well adjusted and do not appear to NEED 1 million pounds, though some might benefit from it a great deal.
Personally in this situation I wouldn’t need to think twice; I would immediately close the door. I have no control over the unfortunate situation the woman is facing either way, but at least I don’t risk a huge negative outcome (the death of myself and my family at the hands of a machine gun armed lunatic).
It is foolish to risk what you have and need for what you do not have, do not entirely know, and do not need.
Anonymous Coward’s defection isn’t. A real defection would be the Confessor anesthetizing Akon, then commandeering the ship to chase the Super Happies and nova their star.
Your defection isn’t. There are no longer any guarantees of anything whenever a vastly superior technology is definitely in the vicinity. There are no guarantees while any staff member of the ship is still conscious besides the Confessor and it is a known fact (from the prediction markets and people in the room) that at least some of humanity is behaving very irrationally.
Your proposal takes an unnecessary ultimate risk (the potential freezing, capture or destruction of the human ship upon arrival, leading to the destruction of humanity—since we don’t know what the superhappies will REALLY do, after all) in exchange for an unnecessarily minimal gain (so we can attempt to reduce the suffering of a species whose technological extent we don’t truly know, and whose value system we know to be in at least one place substantially opposed to our own, and whom we can remain ignorant of, as a species, by anaesthetised self-destruction of the human ship).
It is more rational to take action as soon as possible to guarantee a minimum acceptable level of safety for humankind and its value system, given the unknown but clearly vastly superior technological capabilities of the superhappies if no action is immediately taken.
If you let an AI out of the box and it tells you its value system is opposed to humanity’s and that it intends to convert all humanity to a form that it prefers, then it FOOLISHLY trusts you and steps back inside the box for a minute, then what you do NOT do is:
mess around
give it any chance to come back out of the box
allow anyone else the chance to let it out of the box (or the chance to disable you while you’re trying to lock the box).
Anonymous
Probably I have watched too much Star Trek, but it is hard to shake the suspicion that both the Superhappies and the Babyeaters are sockpuppets for some kind of weakly godlike entity messing around with us for a laff…
“You can’t always get what you want” is one of the few samples of true rationality in this mess, I don’t see why Akon seemed to completely forget that notion.
The happies, humans and babyeaters cannot reach a quick conclusion that everyone will be satisfied with. In short, they should all just suck it up. All species have their little hiccups from the point of view of the others.
Becoming a painfeeling superhappy babyeater, and any other combination imaginable, should be a choice made available to members of all races. This might lead somewhere, or it might not. The happies should feel ashamed of their rash reaction (although the babyeaters will forgive them for this “reasonable mistake”), and after that all three races should continue their personal sufferings until they, inevitably, find a way to come to terms with it, especially given the way they would be exposed to each other culturally in the meanwhile. It’s fair, rational, and offers a long-term way out.
The happy solution is essentially to fight wars until something gives.
Trying to force a solution is hardly rational, which should actually be obvious to all sides, especially the happies. The babyeater society and culture will suffer terrible devastation through a war which they will quickly lose, and I fail to see how putting the rest of the babyeater children through war (in which many will die, painfully) can be called a “definite improvement”. (Many babyeaters would probably just quietly go on eating babies anyway.) It should be completely obvious to the happies that at least a significant part of humanity will not turn itself into baby-eaters willingly, which will result in another war. The same will possibly go for the happies themselves, so they also risk a civil war—and for what? To have three cultures that eat babies for purely symbolic reasons. Makes no sense.
Cultural standards that are forced upon people will be rejected both among the humans and the babyeaters. There will inevitably be continuous rebellions, and the happies would keep enforcing their ideas, which will result in a cycle of wars until the happies leave for one reason or the other, and then there will be civil wars among the humans and babyeaters trying to decide for themselves whether to return to the “old ways” or keep the new alien ways (which would have significant support by then).
And that’s just the start of a cycle of wars with three breeds, now on a path of revenge. At this point, the only sensible solution is to go supernova, breaking the connection. There’s a better chance of the babyeaters finding a path out of their most significant cultural issue, the babyeating, than there is of the three breeds learning to live together once one starts enforcing itself upon the others in the proposed magnitude. The amount of fighting would eclipse the suffering of the babyeater children, thus being pointless.
Blowing the star right now might only be a temporary solution. The happies might find other connections, if they put their minds to it. They obviously travel and develop fast, so it might not even take that long, and they have a lot of data from the humans and babyeaters to work with. With luck the babyeaters might get over their little cultural hiccup before that, but that’s not exactly a sound ethical foundation. (It beats the hell out of starting wars though.)
This war must be stopped before it starts, or at least an attempt must be made (as the humans can’t just force the happies to do anything).
They could attack the happies as a show of “We are willing to die for their right to their values, as much as I loathe them”. It could also remind them that killing is not so much fun once you need to do it to people who are not doing it for “selfish” reasons, and not to people who are just “wrong”. And just as a reminder of what a mess of wars they’re about to create. They could kill themselves. And yes, they could go and blow up the superhappy star as a last resort, hoping that the happies couldn’t recover.
It’s terribly pompous to think that just because all cultures are happy with the way they are now, that somehow makes them superior to the cultures they had previously, let alone the ones someone else has. We think ourselves superior because our standard of living is improved and we like things the way they are better than what we know about the way they were. The only way to compare is to try. We cannot try previous cultures, but the happies should, for the sake of argument, at least try living more like the babyeaters. However, if they change EVERYBODY, there is once again no comparison.
If the happies argue that the babyeaters will learn to be satisfied not being babyeating, then by the same reasoning the happies should be able to learn to be satisfied to eat babies, or at least live with the idea of the babyeaters being babyeating.
So, to return to the point: options of trying different aspects of the three cultures should be made available, and it could probably be agreed that members of each species must be found who are willing to go for this experiment. (It shouldn’t be too hard, really; all volunteers would be doing it so that hopefully others wouldn’t have to.) The happy technology should even make it possible to complete this experiment to satisfactory levels surprisingly quickly.
There’s a chance this experiment wouldn’t produce satisfactory results, but it should be tried before warfare.
Yeah, I know, terribly boring in this topic, but whatever.
Insofar as definitions can be right or wrong, so also counterfactual consequences can be right or wrong, and thus fictional evidence can be right or wrong…
So the rightnesses of the two bodies of fictional evidence in the two endings both depend on the audience’s skill at applied metaethics? And you want to increase the expected rightness of the true ending by correlating the true ending with the audience’s unknown skill? Or by giving the audience an incentive to increase their skill?
(I don’t know the solution. This comment reasoning about your motives is to narrow the search space. Plus it proposes a meaning for your otherwise unexplained term “True”.)
Go back to earth and detonate. Obviously these two species are superior, having not destroyed themselves when they had the means. They should remain uncorrupted.
Since this is fiction (thankfully, seeing how many might allow the superhappies the chance they need to escape the box)… an alternative ending.
The Confessor is bound by oath to allow the young to choose the path of the future no matter how morally distasteful.
The youngest in this encounter are clearly the babyeaters, technologically (and arguably morally).
Consequently the Confessor stuns everyone on board, pilots off to Baby Eater Prime and gives them the choice of how things should proceed from here.
The End
They should go back to colony system Huygens and detonate.
Meanwhile, the Arabs and the Jews, communicating through the exclusive channel of the Great Khalif O. bin Laden negotiating through Internet Sex with Tzipi Livni, arrived at this compromise whereby the Jews would all worship Mohammed on Fridays, at which times they will explode a few of their children in buses, whereas the Arabs would ratiocinate psychotically around their scriptures on Saturdays, and spawn at least one Nobel prize winner in Medicine and Physics every five years.
The chance of running into two alien species in one day seems pretty unusual. Perhaps it means something?
The chance of running into two alien species in one day seems pretty unusual. Perhaps it means something?
That is precisely what makes me think they are sockpuppets of a single entity (even within the story Universe, not just in the sense that Eliezer invented them).
Nova-ing the star isn’t IMO a guarantee of no future contact—there may be other starlines that aren’t discovered yet. Also, the SuperHappies may improve their tech over time, and may find ways of no longer needing starline tech.
Also, if there are three civilizations, odds are there are a lot more. The SuperHappies have a better structure to compete and grow with whatever other galactic superpowers exist out there.
In essence, the “closed locked door” is an illusion in my mind. Not something to base strategy on. It is the kind of thing that primitive 21st century humans would think of, and not the kind of option that an advanced 26th century human should consider viable. Were I the Confessor (and by implication, that is the role we 21st century readers are supposed to play), I would zap the Engineer, because he’s building a house made of straw and taunting the big bad wolf.
But in the context of the story as it stands, this option is pointless, since the commander has already made his decision. Zapping the pilot is equally pointless, unless no one else is able to move the ship. That may be a defect of the story, or it may be deliberate.
Option 1 is to cooperate, so I guess option 2 is defect. The correct way to defect is to destroy Huygens.
Of course, meeting two new species on the same day is the crew of the Impossible having its leg pulled by some superior entity, namely Eliezer. But Eliezer is not above and outside our world, and we don’t have to let ourselves be intimidated by his scripture.
Why and how would communication possibly happen through only one channel? Since when is the unit of decision-making a race, species, nation, etc., rather than an individual? Is this Market-driven spaceship under totalitarian control where no one is allowed to communicate, and the whole crew too brain-damaged to work around the interdiction? I wonder how the Soviet Union made it to the Interstellar Age. Where has your alleged individualism gone?
Why and how is compromise even possible between two species, much less desirable? In the encounter of several species, the most efficient one will soon hoard all resources and leave the least efficient ones as nothing but defanged zoo animals, at which point little do their opinions and decisions matter. No compromise. The only question is, who’s on top. Dear Tigers, will you reform yourselves? Can we negotiate? Let your Great Leader meet ours and discuss man to animal around a meal.
And of course, in your fantasy, the rationalist from way back when (EY) effectively wields the ultimate power on the ship, yet is not corrupted by power. What a wonderful saint! Makes you wonder what kind of wimps the rest of mankind has degenerated into to submit to THAT wimpy overlord. Where has your understanding of Evolutionary Forces gone?
Wanna see incredibly intelligent people wasting time on absurd meaningless questions? Come here to Overcoming Bias! A stupid person will believe in any old junk, but it takes someone very intelligent to specifically believe in such elaborate nonsense.
The nova acted as a rendezvous signal, causing all starlines connected to that star to flare up. Otherwise it’s too hard to find aliens—opening starlines is expensive. It’s the chance of a direct encounter (small) versus chance of at least one mutual neighbor (larger).
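The claim that a mutual neighbor is far more likely than a direct link is just graph combinatorics. A toy Monte Carlo sketch (every number here is invented for illustration; nothing in the story specifies them) makes the gap concrete:

```python
# Toy model: a sparse random "starline" network. Two specific stars A and B
# are linked directly with probability p, and each is linked to any third
# star with the same probability p. All parameters are made up.
import random

def trial(n=1000, p=0.01, rng=random.random):
    """One random draw: does A link B directly, and do they share a neighbor?"""
    direct = rng() < p
    # A and B both link star k with probability p*p, independently for each k
    mutual = any(rng() < p and rng() < p for _ in range(n - 2))
    return direct, mutual

random.seed(0)
trials = 20_000
direct_hits = mutual_hits = 0
for _ in range(trials):
    d, m = trial()
    direct_hits += d
    mutual_hits += m

print(f"P(direct link)     ~ {direct_hits / trials:.4f}")  # analytically p = 0.01
print(f"P(mutual neighbor) ~ {mutual_hits / trials:.4f}")  # 1-(1-p^2)^(n-2), ~0.095
```

With these toy numbers a shared neighbor is roughly ten times likelier than a direct encounter, which is the commenter's point about the rendezvous signal.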
And while I’m at it—confusing pleasure and happiness is particularly dumb. Entities that would do that would be wiped from existence in a handful of generations, and not super-powerful. Habituation is how we keep functioning at the margin, where the effort is needed. The whole idea of a moral duty to minimize other people’s pain is ridiculous, yet taken for granted in this whole story. Eliezer, you obviously are still under the influence of the judeo-christian superstitions you learned as a child.
If you’re looking for an abstract value to maximize, well, it’s time to shut up and eat your food. http://sifter.org/~simon/journal/20090103.h.html
From the fact that the physicists covered up knowledge that they thought was too dangerous for humanity to possess, the crew should immediately deduce that this could have happened several times in the past regarding several topics. The most obvious topic is AGI, so they should search their Archive for records of AGI projects that seemed promising but were mysteriously discontinued.
The nova acted as a rendezvous signal, causing all starlines connected to that star to flare up. Otherwise it’s too hard to find aliens—opening starlines is expensive. It’s the chance of a direct encounter (small) versus chance of at least one mutual neighbor (larger).
Even so, for reasons of which you are very well aware, meeting two sets of aliens should be a lot less likely than meeting one set, so we ought to take that in account when we are trying to make sense of what is going on. But I accept that positing a minor god is rather a primitive reaction, especially as we already know that in your impossible possible world no Singularity is reachable by any means currently envisaged.
“Carl—I’m pretty sure either way we get three more chapters.”
Yes, but I was more worried that we’d only get three more chapters...;-)
Anyway, another reason that the confessor should interfere in this process is because they are awful at bargaining. If they follow through with the deal they will be (initially) seriously depressed about having to kill their own children, there’s the risk of war or oppression of those who do not want to be augmented, and what do they get in return from the happies?
“We are willing to change to desire pleasure obtained in more complex ways, so long as the total amount of our pleasure does not significantly decrease. We will learn to create art you find pleasing. We will acquire a sense of humor, though we will not lie. From the perspective of humankind and the Babyeaters, our civilization will obtain much utility in your sight, which it did not previously possess. This is the compensation we offer you.”
I don’t see the value in this; if one wants more entertaining art and jokes, why not simply accept the augmentation and come up with them yourselves?
Given that the first installment mentions that Akon’s words would be “inscribed for all time in the annals of history”, any internally consistent conclusion would have to feature some subsequent contact with humanity.
Pardon my lapse in fourth-wall etiquette.
What about following the SuperHappies to their first hop, then making THAT star go supernova? That way, they’re cut off, but the humans still have a small chance to ‘save’ the babyeaters. Or vice-versa.
Peter, destroying Huygens isn’t obviously the best way to defect, as in that scenario the Superhappies won’t create art and humor or give us their tech.
Pain and pleasure are signals that we are on the wrong or right path. There’s a point in making it a better signal. But the following propositions are wholly absurd:
to eliminate pain itself (i.e. no more signal)
to bias the system to have either more or less pain in the average (i.e. bias the signal so it carries less than 1 bit of information per bit of code).
to forcefully arrange for others to never possibly have pain in their own name (i.e. disconnecting them from reality, denying their moral agency—and/or obeying their every whim until reality strikes back despite your shielding).
to feel responsible for other people’s pain (i.e. deny the fact that they are their own moral agents).
As for promising a world of equal happiness for all, shameless self-quote: “Life is the worst of all social inequalities. To suppress inequalities, one must either resurrect all the dead people (and give life to all the potential living people), or exterminate all the actually living. Egalitarians, since they cannot further their goal by the former method, inevitably come to further it by the latter method.”
A rational individual has no reason to care for the suffering of alien entities, or even other human entities, except inasmuch as it affects his own survival, enjoyment, control of resources.
Within the confines of the story:
No star that has been visited via starline has ever been seen from another, which implies a universe vastly larger than what can be seen from any given lightcone. Basically, this grants the slightly cryptic assumption that travel between stars without starlines is impossible.
The weapon is truly effective: works as advertised.
Any disagreement with that would have to say why the “‘Assume there is no god, then...’ ‘But there is a god!’” fallacy doesn’t apply here.
The threat of a nova feels like a more interesting avenue than the mere detonation.
I’m not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappys actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive) then either detonate the local star immediately, or the star one hop back.
Wanna see incredibly intelligent people wasting time on absurd meaningless questions? Come here to Overcoming Bias! A stupid person will believe in any old junk, but it takes someone very intelligent to specifically believe in such elaborate nonsense.
One can only wonder what that might imply about those wise folk who have recognised all of this as nonsense, yet continue to read and even respond to it.
Anonymous
I’m not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappys actually do and detonate the star if they start heading for the wrong portal.
---
Once you know someone has technologies vastly ahead of your own, you might as well assume they can do your worst nightmares—because your imagination and assumptions are unlikely to present limits to their capabilities.
Imagine a group of humans circa 800 A.D. making assumptions about how they will be tracked down by a team of modern day soldiers with advanced communications, GPS, satellite imagery, airborne drones, camouflaged clothing, accurate weapons, poison gas, … and those soldiers aren’t even biologically or intellectually more advanced.
If I were the humans, I’d report back to earth (they have valuable information), then send out a robotic probe through the Alderson drive and blow up the star.
The humans in this story know that there are at least two alien cultures, and the culture shock from them is too much to deal with. If there are more cultures, it will be worse.
Another possibility would be to blow up Earth’s sun. This fragments the human species, but increases the probability that some branches of humanity will survive.
Oh boy,
I do not care if anyone creates art, but I do care if sentient beings are hurt.
The Babyeater way of living is basically a socially accepted Gulag, only worse.
And evidently the Happies see humanity the same way.
Now what I also don’t like is collectivism. Even the Superhappies seem rather single-minded, and pretty willing to make decisions for their whole species.
Now despite not fully understanding Superhappy ethics, and without trying to break the story, my proposal would be:
The Superhappies offer to change the living Babyeaters, and will nevertheless rescue each and every baby from being eaten. Then these kids get the choice to return home at any later time (no idea if they would be accepted) or live with the Happies, while also being offered treatment/change for their condition.
[Readers should be aware that with some searching it would be possible to find human cultures with similar ethics in the past. Think samurai, or holy warriors.]
The same solution also works perfectly for the humans. Offer treatment, protect the kids.
The Happies might be able to accept pain that lasts only seconds, but will prevent any form of child abuse.
Now that sounds like an awful lot of work, but I think the Happies might be able to pull it off, and of course it’s the only ethical thing to do that I can think of.
The alternative of killing sentient beings is cruel, no matter what.
Martin
The deal Akon reached with the super happies is so preposterously one-sided that it is no surprise at all the babyeaters did not agree to it, and that could have been foreseen. For either humans or the babyeaters to even consider destroying their identity so the super happies will make art and jokes is absurd. For people, at least, self-identity is vastly more important than overall utility. Super happy art and jokes are worth basically nothing to the babyeaters and humans. If the super happies want humans to switch off physical pain, embarrassment, etc., they should agree to: 1. unconditional sharing of every technological advancement they make; 2. allowing individual adult humans the option of turning pain etc. back on; 3. doing our baby-eating for us. But that’s just a suggestion; the main problem is that the chance of the super happies nailing the fairest possible deal on their first guess is astoundingly small. Even with complete knowledge of human and babyeater culture, their knowledge is phenomenologically inadequate for coming up with a deal that is actually fair to all. Not negotiating was irrational, as was failing to contact the babyeaters to get their thoughts on the deal before agreeing to it; three-party deals require three-party negotiations.
That the Confessor didn’t step in sooner… is kind of ruining the story for me. I’m not sure if these issues were brushed aside to make your point or if you really don’t understand how absurd this deal is.
Stop the superhappies’ ship before it jumps out! They must not learn of humanity’s existence. Use the Alderson drive if necessary.
First, with regards to the solution proposed by the superhappies, my thought would have been, right at the start, this:
Accept IF they can ensure the following: For us, the change away from pain doesn’t end up having indirect effects that, well, more or less screw up other aspects of our development. ie, one of the primary reasons why humanity might have been very cautious in the first place with regards to such changes.
With regards to the business of us changing to more resemble babyeaters, can they simultaneously ensure that the eaten children will not have, at any point, been conscious? And can they ensure that the delayed consciousness (not merely self awareness, but consciousness, period) doesn’t negatively impact, in other ways, human development?
Further, can they ensure that making us, well, babyeater like does NOT otherwise screw with our sympathy and compassion?
IF all of the above can be truly answered “yes”, then (in my view) the price that humanity would pay would not really be all that bad.
Of course, we have to then ask about the changes to the babyeaters? Presumably, the ideal would be something like “delay onset of consciousness until after the culling (and not at all, of course, for those that are eaten)”, but in such a way that intelligence and learning is still there, and when the babyeater becomes conscious, it can integrate data and experience acquired while it was not conscious.
But, a question arises, a possibly very important one: Should the Superhappies firing on the Babyeater ship be considered evidence that Superhappies are Prisoner’s Dilemma defectors?
If yes, then how much can we trust the Superhappies to actually implement the solution they proposed, rather than do something entirely different? And THAT consideration would be perhaps the only consideration (I can think of so far) for really considering the “blow up a star to close down the paths leading to humanity’s worlds” option (post Babyeater fix, perhaps).
If the humans know how to find the babyeaters’ star,
and if the babyeater civilization can be destroyed by blowing up one star,
then I would like to suggest that they kill off the babyeaters.
Not for the sake of the babyeaters (I consider the proposed modifications to them better than annihilation from humanity’s perspective)
but to prevent the super-happies from making even watered down modifications adding baby-eater values -
not so much to humans, since this can also be (at least temporarily) prevented by destroying Huygens -
but to themselves, as they are going to be the dominant life form in the universe over time, being the fastest growing and advancing species.
Of course, relative to destroying Huygens the price to pay in terms of modifications to human values is high, so I would not make this decision lightly.
Is this story self-consistent? Consider that:
(i) it’s easy to make stars go nova.
(ii) when a star goes nova, its Alderson lines disappear, disconnecting parts of the network from each other, and stopping a war if the different sides are now in different parts of it (the fact that the network is sparse is important here)
(iii) both Babyeaters and the Superhappies know this
(iv) nevertheless the Superhappies still plan to prosecute a war against the babyeaters
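Point (ii) is ordinary graph connectivity: deleting a node deletes every edge it anchors, and in a sparse graph that often splits the network. A toy sketch, with an entirely invented starline map (the names and links are not from the story's actual geography):

```python
# Illustrative only: remove a star ("nova") from a sparse starline graph
# and count connected components before and after.
from collections import defaultdict, deque

edges = [("Huygens", "Nova"), ("Nova", "BabyeaterPrime"),
         ("Nova", "HappyHome"), ("Huygens", "Earth")]

def components(edges, removed=None):
    """Return the connected components, optionally with one star removed."""
    graph = defaultdict(set)
    nodes = set()
    for a, b in edges:
        nodes.update((a, b))
        if removed not in (a, b):      # a nova erases every line it anchors
            graph[a].add(b)
            graph[b].add(a)
    nodes.discard(removed)
    seen, comps = set(), []
    for start in nodes:                # breadth-first search from each unseen star
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            queue.extend(graph[v] - comp)
        comps.append(comp)
    return comps

print(len(components(edges)))                  # 1: everyone reachable
print(len(components(edges, removed="Nova")))  # 3: the three sides cut apart
```

In this toy map the nova leaves humans, Babyeaters, and Superhappies in three mutually unreachable components, which is exactly why (iv) is the puzzling item on the list.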
Well. I guess that stunning the Pilot is a reasonable thing to do, since he is obviously starting to act anti-socially. That is not the point though. Two things strike me as a bit silly, if not outright irrational.
First is about the babyeaters. Pain is relative. In the case of higher creatures on Earth, we define pain as a stimulus signaling the brain of some damage to the body. Biologically, pain is not all that different from other stimuli, such as cold or heat or plain tactile feedback. The main difference seems to be that we humans, most of the time, experience pain in a highly negative way. And that is the only point of reference we know, so when humans say that babyeater babies are dying in agony they are making some unwarranted assumptions about the way the babies perceive the world. After all, they are structurally VERY different from humans.
Second is about the “help” humans are considering for babyeaters and superhappies are considering for both humans and babyeaters. Basically by changing the babyeaters to not eat babies or to eat unconscious babies, their culture, as it is, is being destroyed. Whatever the result, the resulting species are not babyeaters and babyeaters are therefore dead. So, however you want to put it, it is a genocide. Same goes for humans modified to never feel pain and eat hundreds of dumb children. Whatever those resulting creatures are, they are no longer human either biologically, psychologically or culturally and humans, as a race, are effectively dead.
The problem seems to be that humans are not willing to accept any solution that doesn’t lead to the most efficient and speedy stoppage of baby eating. That is, any solution where babyeaters will continue to eat babies for any period of time is considered inferior to any solution where babyeaters will stop right away. And the only reason for this is that humans feel discomfort at the thought of what they perceive as the suffering of babies. In that respect humans are no better than the superhappies: they would rather genocide the whole race than allow themselves to feel bad about that race’s behavior. If humans (and hopefully superhappies) stop being such prudes and allow other races the right to make their own mistakes, a sample solution might lie in making the best possible effort to teach babyeaters human language and human moral philosophy, so they might understand the human view on the value of individual consciousness and on individual suffering, and make their own decision to stop eating babies by whatever means they deem appropriate. Or argue that their way is superior for their race, but this time with full information.
… but relative to simply cooperating, it seems a clear win. Unless the superhappies have thought of it and planned a response.
Of course, the corollary for the real world would seem to be: those people who think that most people would not converge if “extrapolated” by Eliezer’s CEV ought to exterminate other people who they disagree with on moral questions before the AI is strong enough to stop them, if Eliezer has not programmed the AI to do something to punish that sort of thing.
Hmm. That doesn’t seem so intuitively nice. I wonder if it’s just a quantitative difference between the scenarios (eg quantity of moral divergence), or a qualitative one (eg. the babykillers are bad enough to justifiably be killed in the first place).
Let’s make a bit of a summary.
Similarities: Each species considers suffering, in general, negative utility. Each species considers survival very high in utility. (Though at least some humans consider the possibility of sacrificing their species for the others’ benefit, so this is not necessarily highest in value.) Each species has a kind of “fun” that’s compatible with the others’, and that’s high in utility. They are all made of individuals, reproduce sexually, can communicate among themselves and at least somewhat compatibly with the others.
Differences:
* crystal pogo-sticks:
- this appears to indicate that they have some equivalent of empathy for other species
- have other “compatible pleasures” with humans, e.g. living & eating, reproduction, and art;
- but consider suffering of winnowed children acceptable (indeed, good) because it is useful for the existence and evolution of their species (the main selective pressure); so the existence and evolution of their species is considered to have massive positive utility. The relationship appears hard-wired in their thinking processes due to natural evolution (because that’s how evolution worked for them).
- avoid their suffering, and that of other species’
- this is not conditioned on the other species’ eating of their children: they tried to “help” humans adopt child-eating although humans don’t already do it; therefore, they assign positive utility to other species’ utility independently of whether or not they eat their children. Also, they didn’t instantly kill the humans, even though they could have at the start.
- appear to be very good team players as a species, even hard-wired for that. In fact, this appears to be the top of their value pyramid.
* noisy bipeds:
- enjoy various pleasures, like living & eating, reproduction, and art and humor;
- avoid their own suffering, and that of others (empathy); this is hard-wired into their brains, as a survival mechanism. But they consider low-level suffering (of children and adults) acceptable (indeed, good) because: it is useful for the existence of their species (learning to avoid things with unpleasant consequences); natural evolution hard-wired biped’s brains to like the results of suffering (this goes as far as valuing more something obtained effort-fully than the same thing obtained effortlessly); in the ancestral environment, many useful things could not be obtained without some suffering, so a complex system of trade-offs evolved in the brain.
- much of their team-playing is rational: they have instincts to cheat, and those are rationally countered if an unpleasant outcome is anticipated (though anticipation is also influenced by cooperative instincts; the rational part has at least some part in balancing them).
* happy tentacly lumps:
- avoid suffering; no explicit indication why, presumably evolved as in the other two species.
- have empathy; this might be evolved or engineered, not clear; but it’s not an absolute value, if we trust their statement that they’re willing to alter it if it causes them unavoidable suffering.
- don’t seem to assign any value to suffering, however.
- like happiness a lot, but this doesn’t seem to be the absolute core value: they’ve not short-circuited their pleasure centers. So there must be something higher: experiencing the Universe? Liking happiness was probably originally evolved (it’s a mechanism of evolution), but might have been tampered with then.
- they seem rational team-players, too: it promises more future happiness rather than less future suffering.
*
I’m a bit less versed in the Prisoner’s Dilemma than I suspect most here are, so I’ll summarize what I understand. There’s supposed to be, for each “player”, the best personal outcome (everyone else cooperates, you cheat), the worst personal outcome (you cooperate, everyone else cheats) and the global compromise (everyone cooperates, nobody gets the bad outcome). I suppose with more than two players there are all sorts of combinations (two ally and cooperate, but collectively cheat against the other); I’m not sure how relevant that is here, we’ll see. In real situations there are also more than two options, even with just two players (as in the ultimatum game, where you may “cheat” more or less). There’s also another difference between the game and reality: in real life you may not really know the utility of each outcome (either because you mis-anticipated the consequences of each option, or because you don’t know what you really want; I’m not sure if these two mean the same thing or not).
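The outcome structure described above can be written down concretely. A minimal sketch, using the textbook payoff numbers (5, 3, 1, 0 are conventional placeholders; any values with temptation > reward > punishment > sucker work):

```python
# One-shot two-player Prisoner's Dilemma with standard textbook payoffs.
from itertools import product

# (my_payoff, their_payoff) indexed by (my_move, their_move); C = cooperate, D = defect
PAYOFF = {
    ("C", "C"): (3, 3),   # mutual cooperation: the global compromise
    ("C", "D"): (0, 5),   # I cooperate, they cheat: my worst personal outcome
    ("D", "C"): (5, 0),   # I cheat, they cooperate: my best personal outcome
    ("D", "D"): (1, 1),   # mutual defection
}

for mine, theirs in product("CD", repeat=2):
    me, them = PAYOFF[(mine, theirs)]
    print(f"I play {mine}, they play {theirs}: I get {me}, they get {them}")

# Defection strictly dominates in the one-shot game...
assert PAYOFF[("D", "C")][0] > PAYOFF[("C", "C")][0]
assert PAYOFF[("D", "D")][0] > PAYOFF[("C", "D")][0]
# ...yet mutual cooperation beats mutual defection for both players:
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

The dilemma is exactly the tension the two assertions exhibit: each player does better by defecting whatever the other does, yet both prefer mutual cooperation to mutual defection.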
Let’s see the extreme options. “+” means what each species considers the best outcome and “−” means what it considers worst if each species defects (as far as I can tell).
* crystal pogo-sticks:
+ everyone starts having a hundred children and eating them just before puberty.
- they are forced to keep living and multiplying, but prevented from eating their children; they don’t even want to eat them, the horror!
- same as above, but they’re also happy about it and everything else.
* noisy bipeds:
+ they keep living and evolving as they do now; the crystal pogo-sticks stop eating self-aware children and are happy about it; and the happy tentacly lumps keep being happy and help everyone else be as happy as they want; either they start liking “useful” suffering or they stop empathizing with the suffering of people who do want it.
- everyone starts having a hundred children and eating them just before puberty.
- everyone stops suffering and tries to be as happy as possible, having sex all the time. The current definition of “humanity” no longer applies to anything in the observable Universe.
* happy tentacly lumps:
+ everyone stops suffering and tries to be as happy as possible, having sex all the time. Horrible things like the current “humanity” and “baby-eaters” no longer exist in the observable Universe :))
- everyone starts having a hundred children and eating them just before puberty.
- humans keep suffering as much as they want, and keep living and evolving as they do now; the crystal pogo-sticks stop eating self-aware children and are happy about it, but may keep as much suffering as the humans believe acceptable; and they themselves keep being happy, help everyone else be as happy as they want, and start liking “useful” suffering.
This doesn’t mean necessarily that each outcome is actually possible. As far as I can tell from the story, only the happy tentacles can actually cheat that way. The worst that humans can do from the tentacle’s POV is start a Dispersion: run back and start jumping randomly between stars, destroying the first few stars after jumping. Depending on who they want to screw most, they may also destroy the meeting point, and/or send warning and/or advice to the crystal pogo-sticks. I think the pogo-sticks can do the same (it appears from the story that the star-destroying option is obvious, so they could start a Dispersion, too). This wouldn’t prevent problems forever, but it would at least give time to the Dispersed to find other options.
The “compromise” proposed by the happy tentacly lumps doesn’t seem much worse than their best option, though: the only difference I can see is that everyone starts eating unconscious children. (I don’t see why they wouldn’t try humor and more complex pleasures anyway: they haven’t turned themselves into orgasmium, so they presumably want to experience pleasurable things, not pleasure itself.) I don’t understand crystalline psychology well enough, but it seems pretty close to the worst-case scenario for them. And it’s actually a bit worse than the worst-case tentacle-defecting scenario for the humans.
The tentacly lumps may think fast, but it seems to me that either they don’t think much better, or they’re conning everyone else. They’re in quite a hurry to act, which is a bit suspicious:
OK, it’s reasonable that they’re concerned about the crystalline children. But they also know that the other species have trouble thinking as fast as them, and there’s another option that I’m surprised nobody mentioned:
As long as everyone cooperates, everyone can just agree to temporarily stop doing whatever the others find unacceptable, and use the time to find more solutions, or at any rate understand the solution others propose. They may find each other “varelse” and start a war, but I see no reason for any species to do it right now, even if they know they’d win it. (This assumes they all cooperate in the Prisoner’s Dilemma as a matter of principle, of course.)
*
While the crystal pogo-sticks and the noisy bipeds won’t much enjoy putting a temporary stop to having children (say, a year or a decade, even a century), I don’t see why having the “happy tentacly compromise” right now would be higher in their preference ordering, since apparently nobody ages significantly. Even a temporary stop to disliking not having children doesn’t seem a problem (none of the three species seem inclined to reproduce unlimitedly, so they must have some sort of reproductive controls already, beyond the natural ones). The happy tentacly lumps are carefully designed in the story to have no unwanted attributes themselves, except that they want to, and can, transform the other species against their will. The humans (and I) seem to consider their private habits, as far as they shared them, merely a bit boring relative to others, and the crystalline pogo-sticks seem to consider not eating children dislikable but acceptable in other species, at least temporarily (since they didn’t attack anyone). So the only compromise they’d have to make is to temporarily stop empathizing with small amounts of suffering (i.e., that of the other species not having children during the debate) and not forcibly convert them until afterwards.
As far as I can tell after a day of thinking, the result of the debate would include the crystalline pogo-sticks understanding that not eating children and cooperating are compatible in other species (they do have the concept of a “mistaken hypothesis”, and they just got a lot more data than they had before; also they didn’t instantly attack a species that never eats its children), and also accepting some way of continuing their way of life without eating conscious children. Depending on the reproduction (& death, if applicable) rates of each species, and their flexibility, it might even be technically possible to let them reproduce normally, but modify their children such that they don’t suffer during the winnowing, and the eaten ones become a separate non-reproducing species voluntarily.
As for the humans, from my reading of the story I understand that the happy tentacly lumps mostly object to involuntary human suffering, i.e. that of the children. They don’t like the voluntary suffering, but it doesn’t seem to me they’d force the issue on adults. So they should at least accept letting the existing adults decide if they want to keep their suffering, such as it is. I don’t find unacceptable a compromise where children get to grow up without any suffering they don’t want, especially (but not necessarily) if the growing up is engineered so that the final effect is essentially the same (i.e., they become like “normal” humans and accept suffering in “usual” circumstances, even if they didn’t grow up with it). Of course, we’re psychologically closer to the Confessor than to the rest of the humans in the story, so what we consider acceptable is as irrelevant as his to what decision they’d take.
The happy tentacly lumps might have simply anticipated all this, and decided on the best outcome they want. (In case they’re really really smart and practically managed to simulate the others species.) This would explain why they didn’t propose the above, but would make the story moot. In that case the situation is somewhat analogue to an AI in the box, except that you can’t destroy the box nor the AI inside, you can only decide to keep it there. My decision there would be to put as big a pile of locks as I can on the box, and hope the AI can’t eventually get out by itself. The analogue of which would be Dispersion. (But the analogy is not an isomorphism: the AI is in an open box right now, and it doesn’t seem to try to jump out, i.e. it didn’t blow up the human ship yet, which is why the story is still interesting.)
Go back to Earth and detonate. It will mean the end of the civilization they know, but the Superhappys will still hunt the survivors down with 2^2^2^2 ships, and will force an equitable compromise on each surviving pocket of humanity, which together will make the whole more human than it would be with just one compromise with humanity.
I just can’t figure out who the Confessor will shoot, or if he will just threaten, to make it happen. And I want to read both endings.
If the Superhappies are more advanced than us, then shouldn’t they know the true value of the strong nuclear force, and thus know that blowing up the star is an option?
The Superhappies’ decision seems reasonable. I am not sure what alternative solution might be. Hrm.
Dmitry, concerning genocide, I believe you are anthropomorphizing a culture. “Babyeater culture” is not a person. Eliminating the culture is not a crime if performed by non-murderous means; consider an alternative “final solution” of using rational arguments and financial incentives to convince Jews to discard Judaism.
Perhaps the act of forcible biological modification to prevent criminal behavior is wrong (e.g. chemical castration for child molesters), but it isn’t the same as a murder.
What is giving some people the impression that saying “no” was an option? I mean, they could have turned down the compromise, but unless they had something to offer right then, that would have meant instant death (and then the compromise would be implemented anyway). “Yes” means the humans are not defecting right now, when defecting would be (pointlessly) suicidal.
Chris, I don’t think I am wrong in this. To give an analogy (and yes, I might be anthropomorphizing, but I still think I am right), if someone gives me a lobotomy, I, Dmitriy Kropivnitskiy, will no longer exist, so effectively it would be murder. If Jews are forced to give up Judaism and move out of Israel, there will no longer be Jews as we know them or as they perceive themselves, so effectively this would be genocide.
I am not certain I understand the terms of the puzzle. Should the audience come up with a better ending, a more plausible ending, or an ending which works better as story? And if we fail at this task, will we still get to know the other ending you had in mind?
Humanity could always offer to sacrifice itself. Compare the world where humanity compromises with both the Babyeaters and the Super Happy, versus one where we convince them to not compromise and instead make everybody Super Happy.
Of course, I’m just guessing, since I’m not a Utilitarian.
The Super Happies hate pain, and seeing others in pain causes them to experience pain. Humans tolerate pain better than the Super Happies do. This gives the humans a weapon to use against them, or at least negotiating leverage. They can threaten to hurt themselves unless the Super Happies give them a better deal.
(So, in order to unlock the True Ending, do we have to come up with a way for the humans to “win” and get what they want, alien utility functions be damned, or should we take the aliens’ preferences into account too?)
(Long time lurker—first post)
The course I would suggest, if on the IPW, would be to rally the Human fleet to set up a redundant and tamper-resistant self-destruct system on the newly-discovered star—with a similar system set up at the Human colony one jump further back.
When the Super-Happys return, we would give them the option:
1. Altering their preferences to align with Human values, at least enough so that they would no longer consider changing Humans without their full consent.
2. Immediately detonating the star—so they would no longer be able to rescue the Baby-Eater’s Babies.
Any other course of action, or attempting to tamper with the self-destruct would trigger the self-destruct (and perhaps that on the next Human Colony in case they prevented the first nova).
We would offer volunteers to join the Super-Happys, in order to explore the feasibility and desirability of further harmonization. (and also monitor their compliance with the agreement… and steal as much technology as possible).
I say this as an unabashed defender of the superiority of Human values, who is willing to use our native endowment of vicious craftiness to defend and promote those ideals.
Akon has clearly lost his mind, so the Confessor should anesthetize him. He does not need to break his oath and take command of the ship. Instead he can just point out some obvious things to the rest. Such as that it would be crazy to blackmail the Superhappies using a single ship with no communication to the rest of humanity. Or that interest rates need not fall through the floor the way Akon was trying to convince them, but instead would rise by a similar amount. Or what Cabalamat pointed out. I am just not sure which ending this leads to.
This was a failed negotiation. The fact that the babyeaters rejected the superhappy proposal means it is not symmetric. It is not a compromise that fair babyeaters would propose if they were in the superior position.
That the superhappies proposed it and then ignored evidence that it was unacceptable is evidence that the superhappies are not being as fair as Akon seemed to think they were. It is obvious that they are not sacrificing their value system as much as they are requiring the babyeaters to. They are pushing their own values on the babyeaters because they CAN, not because they are offering a balanced utility exchange. They are likely doing the same to us.
They view the babyeater situation as dire enough that they are willing to enact modifications without acceptance. They gave humankind a general proposal that they predicted humankind would accept. They COULD just make modifications, but part of their value system includes getting human acceptance.
I’m not sure, but I think the humans should threaten/go to war with them, so they make no more modifications except those that they think they MUST make. That’ll be my guess. Stun the captain, go to war.
I’m not sure what the babyeater’s current stance says about how much they’ve considered the possibility that they will encounter superpowered babyeaters in the future.
Dmitry, if someone destroys your brain or alters it enough so that it is effectively the brain of a different person, that is indeed murder. Your future utility is lost, and this is bad. Forcing you to behave differently is not murder. It may be a crime (slavery) or it may not be (forcing you to not eat your children), but it is not murder.
Genocide (as I understand the term) is murder with the goal of eliminating an identifiable group. It is horrific because of the murder, not because the identifiable characteristics of the group disappear.
My understanding is that preventing babyeating will be done in such a way as to minimize harm done to adult babyeaters, and only if such harm is outweighed by the utility of saving babyeater children. It is vastly different than genocide; the goal is to prevent as much killing as possible, not eliminate the babyeating aliens.
Incidentally, my hypothetical “final solution” is actually a Pareto improvement: every Jew who converts does so because it increases his/her utility.
I would guess that the True Ending involves the Confessor stunning Akon. The aliens used every trick in the book to influence the humans. They communicated using real-time video instead of text transmissions. They gave speeches perfectly suited to tug on people’s emotional levers. Since the Superhappies run at an accelerated rate, this also forced Akon to respond before he could fully process information.
I would almost say Akon’s mind has been hacked. Akon had very little time to think before accepting the Superhappy terms and he currently seems resigned to the destruction of humanity. He uses “negotiations” to describe the Superhappy ultimatum. Anyway, he’s probably not fit to lead the ship. The Pilot hasn’t had a mental breakdown, he’s just (understandably) outraged at what’s going on. If the stunner is only used in the case of mental breakdown, the Pilot will have to be stopped by other means. Once a new leader is elected/promoted/whatever, the Confessor should require all real-time communication from the Superhappies to be text-only.
The Superhappies may be technologically superior, but their weakness is the fact that they don’t separate genes from memories. They also don’t withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race. Even the kiritsugu are shocked by the slightest display of suffering, so it’s not much of a stretch to say some images exist that would permanently traumatize all Superhappies.
Of course, destruction isn’t the goal, modifying is. Before the Superhappies leave, the humans should ask to stay in contact with one Superhappy ship during Operation Babyeater. By studying them more, the humans could find a way to insert a memory that changes Superhappies to be less of a threat. If the humans have the upper hand, they can actually decide whether or not to adopt superhappiness instead of having the choice forced on them.
If it doesn’t work, at least the humans will know how big the Superhappy armada is. They could wait for the Superhappies to return from Babyeater territory and blow up the system. The babies would be saved and humanity would be safe until the next nova.
Full cooperation is not one of the scenarios I outlined, since most humans would not want to become Superhappy. As the Confessor said, “You have judged. What else is there?”
Does anyone else have suspicions about the “several weeks” timeframe that the Lady 3rd has given for the transforming of the Babyeaters?
What can the Superhappies do in several weeks, regardless of their hyper-advanced and hyper-advancing technology? I suspect not much other than kill off most of the species. In the long run, a quick genocide will decrease suffering more than an arduous peaceful solution would.
Genocide seems even more likely since Lady 3rd told Akon that his decision would be identical to that of other human decision makers.
The Babyeaters of the ship decided not to cooperate and they were destroyed. The rest of the decision makers of the Babyeaters will not cooperate and will have to be destroyed (in the mind of the Lady 3rd).
So at this point, the Confessor shocks the Administrator and they allow the Superhappies to go on with their genocide of the Babyeaters. Unavoidable and humanity would have done a very similar thing anyway. Then destroy the star and go back to Earth to prepare to meet the Superhappies again in a few decades or so (since their progress is a few orders of magnitude faster, humans can easily expect to see them again uncomfortably soon). Preparations would include eliminating suffering and such so that a new war would be avoided after the next meeting. Why on earth haven’t they eliminated pain anyway? :)
The Informations told/implied to the Humans that they don’t lie or withhold information. That is not the same as the Humans knowing that the Informations don’t lie.
I’m beginning to suspect this is a trick question. Well, sort of.
If the situation were reversed, how would you answer? If the technologically advanced Babyeaters had offered a one-sided “compromise” and then destroyed the primitive Superhappy ship when they refused?
The strong aliens have demonstrated their willingness to defect in a prisoner’s dilemma type situation while the weak ones cooperated. That suggests we should cooperate with the weak ones and defect against the strong ones. I don’t think the particulars of their moral systems should override that.
Prisoner’s Dilemma has been prominent enough in the story that Akon’s failure to appreciate the implications of the defection seems like a severe lapse of judgement. The Confessor stuns him and the remaining crew reconsiders the situation.
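The cooperate-with-cooperators, defect-against-defectors logic this commenter appeals to is the classic tit-for-tat strategy from the iterated Prisoner’s Dilemma. A toy simulation illustrates why it’s considered robust (the payoff numbers and function names below are my own standard-textbook choices, not anything from the story):

```python
# Toy iterated Prisoner's Dilemma. Payoffs follow the usual
# convention: temptation 5, reward 3, punishment 1, sucker 0.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate first; afterwards mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each side's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Against a fellow cooperator, tit-for-tat locks into mutual cooperation;
# against a defector, it is exploited only once, then punishes every round.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point the commenter is making maps onto this directly: the Babyeaters played "C" and the Superhappies played "D", so a history-sensitive strategy treats them differently regardless of their stated moral systems.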
Eliezer’s novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.
For example, eating babies was a valid instrumental goal when the Babyeaters were at an early state of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure which will eventually either lead to your extinction or the disintegration of your agricultural civilization with a reversion to a more primitive existence in which technological advancement is slow, uncertain and easily reversed by things like natural disasters.
But then babyeating became an end in itself.
By clinging to the false terminal value of babyeating, the Babyeaters caused their own extinction even though at the time of their extinction they had an alternative means of preventing an explosion of their population (particularly, editing their own genome so that fewer babies are born: if they did not have the tech to do that, they could have asked the humans or the Superhappies for it).
In the same way, the humans in the novella and the Superhappies are the victims of a false terminal value, which we might call “hedonic altruism”: the goal of extinguishing suffering wherever it exists in the universe. Eliezer explains some of the reasons for the great instrumental value of becoming motivated by the suffering of others in Sympathetic Minds in the passage that starts with “Who is the most formidable, among the human kind?” Again, just because something has great instrumental value is no reason to promote it to a terminal value; when circumstances change, it may lose its instrumental value; and a terminal value once created tends to persist indefinitely because by definition there is no criterion by which to judge a system of terminal values.
I hope that human civilization will abandon the false terminal value of hedonic altruism before it spreads to the stars. I.e., I hope that the human dystopian future portrayed in the novella can be averted.
Geoff: “They also don’t withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race.”
This is not Star Trek, my Lord.
Note that the kiritsugu as depicted through Cultural Translator versions 2 and 3 doesn’t show any shock at humans being stressed; that depiction only appears in version 16. As such, it seems likely that this depiction is not based on the kiritsugu’s actual emotional state, but rather added to better allow humans to communicate with ver.
I do. Pain is painful in “beasts” too. What does it matter if they are made of crystals, are hairy or whatever?
Chris, continuing with my analogy: if instead of a lobotomy I was forced to undergo a procedure that would make me a completely different person without any debilitating mental or physical side effects, I would still consider it murder. In the case of Eliezer’s story, we are not talking about enforcement of a rule or a bunch of rules; we are talking about a permanent change of the whole species on a biological, psychological and cultural level. And that, I think, can safely be considered genocide.
The humans, Babyeaters, and Superhappies were attracted by the nova. They were all eager to meet aliens. The Babyeaters and the Superhappies have the means to create supernovae artificially. They should be able to create ordinary novae too. This would be a good way to meet aliens. Why haven’t they tried that?
Peter—I am, sadly, not an astrophysicist, but it seems reasonable that such an act would substantially decrease the negentropy available from that matter, which is important if you’re a species of immortals thinking of the long haul.
Peter, being able to blow up a whole star (a process that is obviously going to involve some kind of positive feedback cycle) is not the same as being able to start novas. A nova is not a detonation of a star. A nova is the detonation of a shell of hydrogen that has accumulated from a companion and compressed on the surface of a degenerate star (white dwarf).
I had asked why the Babyeaters and Superhappies have not intentionally created novae. But now I think it’s pretty likely that the Babyeaters actually caused the nova. The Babyeaters were in the system first, despite being the least technologically advanced race, and despite having made special preparations for the hostile environment (the mirror shielding). If they had come in response to the nova, they probably would have been the last to arrive.
We know an Alderson drive can cause a supernova. We should consider the possibility that the original nova wasn’t just a coincidental rendezvous signal, but was intentionally created by the Superhappies. Of course this assumes that Alderson drives are just as good for creating a nova as a supernova.
I missed Eli’s reply before my most recent post. Although he hasn’t said that the Babyeaters can’t induce a nova, I’m lowering my probability that they did.
What if the Superhappys created the Babyeaters and the supernova? The baby eaters wouldn’t really eat babies, they wouldn’t even really exist. And seeing the baby eaters would make humans more apt to compromise when they shouldn’t. http://en.wikipedia.org/wiki/Argument_to_moderation
So shoot the hypnotized Captain.
2. … and anesthetized the entire crew, at which point he proceeded to have nonconsensual sex with every person aboard the ship. When in Rome...!
Z. M. Davis: OK, well it may not be self-replicating but it was worth a shot. Extreme empathy is basically the only weakness the Superhappies have. I’m not a big Star Trek fan, so I haven’t seen the first two episodes you linked to and I only vaguely remember the last one.
51a1fc26f78b0296a69f53c615ab5a2f64ab1d1e: Or early versions of the translator failed to convey the humans’ stress to the Superhappies. The kiritsugu are rather isolated from the rest of the crew, so while they have knowledge of the Babyeaters, maybe they haven’t seen the videos. It would be analogous to reading about the Holocaust versus stepping into a holodeck depicting a concentration camp. Yes, I’m assuming aliens have a bias similar to humans. If that’s not the case, then all non-kiritsugu Superhappies will be grief-stricken for quite some time after hearing about the Babyeaters. There would also have to be a very good reason why kiritsugu lack an emotion/reaction found in the rest of their kind. Humans without empathy are autistic or psychopaths. Again I’m arguing from a human analogy, but removing an emotion can completely change a being (http://www.overcomingbias.com/2009/01/boredom.html).
Anyway, most of my speculation is probably wrong, but the main point I tried to make in my previous post is that Akon’s leadership is seriously compromised. The Superhappies are very manipulative and the Confessor needs to get a handle on things before saving humanity gets any tougher.
Did I mention a holodeck? Ugh, curse you Star Trek.
Another question:
Do the Super Happies already know where the human worlds are (from the Net dump), or are they planning on following the human ship back home?
As noted earlier, the Superhappies don’t appear to be concerned about the presumed ability of the Babyeaters to make supernovas. Perhaps they have a way of countering the effect, and have already injected anti-supernova magicons through the starline network back to Earth and Babyeater Prime. In that case trying to detonate either immediately or at Huygens would fail, while eliminating any trust the Superhappies had in us. Maybe that’s not much worse; they wouldn’t punish us for the attempt, it might just make them more aggressive about fixing us.
Also, is the cosmology such that the general lack of visible supernovas is significant? It would seem that the normal development for “human-like” technological civilizations is that shortly after discovering the Alderson drive, a mad scientist or misguided experiment blows up the home star. Babyeaters and Superhappies apparently avoided this by having some form of a singleton, and humans got lucky because the scientists were able to suppress the information. Humans may be the most individualistic technological civilization in the universe.
I’m surprised the Super Happy People are willing to allow pre-sentient Baby Eaters to be eaten. Since they do not distinguish between DNA and synaptic activity, they might regard the process of growing a brain as a type of thought and that beings with growing brains are thus sentient.
It seems we are at a disadvantage relative to Eliezer in thinking of alternative endings, since he has a background notion of what things are possible and what aren’t, and we have to guess from the story.
Things like:
How quickly can you go from star to star?
Does the greater advancement of the superhappies translate into higher travel speed, or is this constrained by physics?
Can information be sent from star to star without couriering it with a ship, and arrive in a reasonable time?
How long will the lines connected to the novaing star remain open?
Can information be left in the system in a way that it would likely be found by a human ship coming later?
Is it likely that there are multiple stars that connect the nova to one, two or all three alderson networks?
And also about behaviour:
Will the superhappies have the system they use to connect with the nova under guard?
How long will it be before the babyeaters send in another ship? the humans, if no information is received?
How soon will the superhappies send in their ships to begin modifying the babyeaters?
Here’s another option with different ways to implement it depending on the situation (possibly already mentioned by others, if so, sorry):
Cut off the superhappy connection, leaving or sending info for other humans to discover, so they deal with the babyeaters at their leisure.
Go back to give info to humans at Huygens, then cut off the superhappy connection.
Go back to get reinforcements, then quickly destroy the babyeater civilization (suicidally if necessary) and the novaing star (immediately after the fleet goes from it to the babyeater star(s), if necessary).
In all cases, I assume the superhappies will be able to guess what happened in retrospect. If not, send them an explicit message if possible.
The crew certainly hasn’t fully considered the implications of returning home at this point. Akon touched on it: humanity is something of an impostor as far as “not-stupid” space-faring species go. Whatever the outcome, the Lord Pilot has proven that humanity is capable of fragmenting and going to war over this issue. If the Impossible Possible World returns to human space, then people will have both the motive to fight each other and the means to destroy themselves, now that the Alderson cat is out of the bag.
Moreover, it’s quite possible that the “negotiations” were performed under a false premise. It seems likely that the Superhappies are aware of the capability of the Alderson drive, but they very well may not be aware of humanity’s ignorance. The records sent to the Babyeaters were censored, particularly of technical data, so the coupling constant error might not have been there. Moreover, Lady 3rd does seem remarkably sure that human decision-makers will make consistent decisions. This could be a result of assuming humans have dealt with the Alderson drive filter like the Babyeaters and Superhappies, who are then both much less diverse than humans.
All that boils down to the idea that the Superhappies may be assuming that the modification of humanity will be a cooperative venture when in fact it is likely to be a species destroying war.
I guess the real question is does humanity thus constitute a “species in circumstances that do not permit its repair”?
It is possible that, like the Babyeaters’ attempt to convince the crew to eat their own babies, the Superhappies’ arguments are also rationalizations when their most fundamental purpose is really eliminating their own pain. Thus they might be willing to trade away eliminating all human pain if modifying their own empathic faculty turns out to be the less painful-to-them alternative.
As to what the Confessor is going to do? I’m not sure. He could stun whoever is in control of the communications (Lady Sensory?) and transmit the relevant information to the Superhappies. He may believe the upside result to be better terms and the downside result to be the Superhappies enacting their compromise by force. He could feel that downside to be only slightly worse than voluntary implementation and significantly better than the sparking of a potentially human-destroying war.
Of course they may want to employ the slitting their own throats option in either case given the Alderson secret and humanity’s posited issues.
Mind you, to me there seem to be so many risks that I’d have to assume some special insight due to his commonality with Lady 3rd in order to be confident of making such a gamble on his own. It seems the risk-averse action would be to go back to Huygens and detonate the star the moment the crew arrives. This would kill every colonist, but would save the Babyeaters’ babies and prevent humans from learning how to destroy themselves (for the moment at least).
The lesson I draw from this story is that in it, the human race went to the stars too soon. If they had thought more about situations like this before they started travelling the starlines, they’d have a prior consensus about what to do.
Fiction like this may be the nearest thing to a way to avoid such a blunder. Occasionally a pundit says “Nobody has ever given any thought to the consequences of biotechnology,” as if sf didn’t exist, so I’m not hopeful.
Superhappies care about the happiness of all. Therefore humans can blackmail them by parking ships near stars in inhabited solar systems, and threatening to supernova the star in case a Superhappy ship jumps into the system, or whatever. (The detonation mechanism should probably be automated so that Superhappies can be certain humans actually go through with the detonation, instead of just making the threat.)
For humanity to maximize their influence over the future, they should immediately proceed to set up as many “booby-trapped” solar systems as they can. Then renegotiate with the Superhappies (and also spend more time thinking).
(I only had the patience to read the first page of comments, so I don’t know if there was already talk of this option.)
I misread it just as Anonymous Coward did. I thought they killed the Babyeaters and head back on their (the Babyeaters) star line. Thus I thought AC’s first solution was perfect. I also liked AC’s second solution.
It seems far too possible that the Super Happies are not telling the whole story here. For all Akon and his crew know, virtually everything the SH have told them is a lie crafted to minimize their resistance so that the SH can either enslave them or destroy them and take over their starline network. Several of the SH’s actions have been fairly suspicious—for example, they seemed awfully quick to give up on diplomacy with the Babyeaters. Also, “several weeks” seems like an implausibly short amount of time for even a highly advanced technological force to pull off the kinds of changes the SH have planned for the BE. Conversely, it does not seem as implausible a time frame for destroying/enslaving the BE. Suspicious, I tell you. The facts fit either hypothesis about equally, except that I’m not entirely convinced that a species which truly eliminated pain would survive twenty years.
The fact that Akon and the gang don’t appear to have even considered this possibility leads me to agree with the Confessor’s apparent conclusion that the crew is no longer sane.
At this point, humanity has totally failed at interstellar diplomacy. The SH know about humanity now, and at the rate they are developing, they will probably be able to find us within a few centuries even if the Impossible destroys the local star. The only acceptable solution I see is to follow the SH ship and hope that they can find a nova-able star in/near the SH home system.
mcow, I might also assign a rather high probability to the Superhappies having just made everything up as a lie, were it not for their choice to blow up the Babyeaters while leaving the humans to return home.
Their behaviour is consistent with what they have claimed to be, but not with e.g. being standard invading aliens that are just lying.
Demetri: “Chris, continuing with my analogy, if instead of lobotomy, I was forced to undergo a procedure, that would make me a completely different person without any debilitating mental or physical side effects, I would still consider it murder.”
Do you also consider it a birth?
@Aleksei
For all we know, the Babyeaters don’t exist. That entire scenario may have been invented by the Super Happies just to make the humans easier to manipulate. If so, it certainly worked pretty well, don’t you think? Also, remember Akon’s thoughts on seeing the BE solution to withstanding the radiation:
I’m really beginning to think that the SH are predators who caused this nova to use it as bait. The BE are there because the SH have found that it sets up a nice prisoner’s dilemma situation for a large variety of meta-ethics systems. The target comes in, and the BE ship immediately transmits a fabricated archive (or maybe a real archive; the BE may have existed at some point). The SH know that tit for tat is a highly effective (and therefore probably common) strategy, and therefore their prey will feel obligated to send data back to the BE. The SH then use this data to determine exactly how to manipulate the prey—for humans, they used “super happy”, but had they been preying on, by way of example, Vulcans, they would have used “ultimate understanding” or something. Meanwhile, the prey wear themselves out trying to figure out what to do about the BE. After a day or so, the prey are ripe for the pickin’.
Maybe I’m wrong, but if there’s even a 1% chance that I’m right, I don’t think the crew of the Impossible can take the risk.
Dmitry, we are in agreement that a sufficiently large mind altering change is a killing.
But in principle, changing babyeater society does not require the killing of even a single babyeater. Simply keep unmodified adult and child babyeaters separate, and modify the pre-sentient children to prevent them from wanting to eat babies in the future. No sentient being is killed/modified, although freedom of movement/action is restricted.
In practice, modifying babyeater society would probably involve more bloodshed. But as long as this bloodshed is minimized as much as possible, and is less than the harm caused by babyeating, I don’t see it as genocide. Is it genocide to kill some Nazis as part of an effort to stop them from killing innocents?
mcow, I still think the SH should be blowing the humans up at this point, if that’s what they’re about. I don’t see that they’d really gain anything by still keeping up such a supposed illusion, since they’ve already determined the humans to be technologically inferior. They could go for a surprise attack on human civilization now, and it would be at least as defenseless as in the scenario where they let the human ship return with false news of non-invading aliens (news which everyone would not believe).
Aleksei, what they’d gain from keeping up the illusion is the knowledge of where the rest of humanity is. They have to coordinate their attack so that the humans don’t have a chance to cut off the rest of their starline network. If the humans figure out what’s up, they can mess up the short-term plans of the SH, even if they can’t win an all out war.
Eliezer, you might do well to thoroughly understand and consider Fare’s criticisms of you. He seems to be one of the few readers here who isn’t in awe of you, and has therefore managed to nail some fundamental mistakes you’re making about human values. Mistakes that have been bugging me for some time too, but that I haven’t been able to articulate, possibly because I’m too enchanted by your cleverness.
We don’t value strangers on par with our friends and family, let alone freaky baby-eating or orgy-having aliens. Furthermore, I don’t want to be altered in such a manner as to make me so empathetic that I give equal moral standing to strangers and kin. I believe THAT would make me less human. If you or an FAI alters me into such a state, you are not correctly extrapolating my volition, nor of who knows how many other people like me. Do you have an estimate of how many people like that there are? How did you come by such an estimate?
So anyway, if this happened in any real future, I have no doubt some star would soon get supernova’d—current star, Huygens, Happy Homeworld, Eater Homeworld, or Sol, in that order of likelihood. For these idealized humans inhabiting the uncanny valley of empathy that creates the whole contrived dilemma in the first place, who knows? Maybe the fact that a nova was what brought them there, and now they’re contemplating creating a supernova, is some kind of clue. Maybe the definition of “non-sentient baby” can be stretched to the point where the story ends as a blowjob joke, but I doubt it. Also, the mechanics of exactly how other people’s pain affects the Happies haven’t really been examined. It sounds like they’re merely extrapolating the pain they think others must be feeling… given that they’ve had no scruples against engineering other sources of discomfort out of existence, why not engineer that out of existence too?
“It sounds like they’re merely extrapolating the pain they think others must be feeling… given that they’ve had no scruples against engineering other sources of discomfort out of existence, why not engineer that out of existence too?”
They already said they will engineer out the sympathy and replace it with a non-painful desire to alleviate pain.
One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they’re being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:
1. reduce their baby-eating activities, and/or
2. send their own baby-eater ship to blow up the star (since the fictional characters are probably barred by the author from reducing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don’t have to sacrifice themselves.
The Ship’s Confessor uses the distraction to anesthetize everyone except the pilot. He needs the pilot to take command of the starship and pilot it. The ship stays to observe which star the Superhappy ship came from, then takes off for the nearest Babyeater world. They let the Babyeaters know what happened, and tell them to supernova the star the Superhappies came from, at all costs.
When everyone wakes up, the Ship’s Confessor convinces the entire crew to erase their memory of the true Alderson’s Coupling Constant, ostensibly for the good of humanity. He pretends to do so himself, but doesn’t. After the ship returns to human space, he uses his accumulated salary to build a series of hidden doomsday devices around every human colony, and becomes the dictator of humanity through blackmail. Everyone is forced to adopt a utility function of his choosing as their own. With every resource of humanity devoted to the subject, scientists under his direction eventually discover a defense against the supernova weapon, and soon after that, the Babyeaters are conquered, enslaved, and farmed for their crystal brains. Those brains, when extracted and networked in large arrays, turn out to be the cheapest and most efficient computing substrate in the universe. These advantages provide humanity with such a strong competitive edge that it never again faces an alien that is its match, at least militarily.
Before the universe ends in a big crunch, the Confessed (humanity’s eventual name) goes on to colonize more than (10^9)^(10^9) star systems, and to meet and conquer almost as many alien species, but the Superhappy people are never seen again. Their fate becomes one of the most traded futures in the known universe, but those bets will have to remain forever unsettled.
In case it wasn’t clear, the premise of my ending is that the Ship’s Confessor really was a violent thief and drug dealer from the 21st century, but his “rescue” was only partially successful. He became more rational, but only pretended to accept what became the dominant human morality of this future, patiently biding his time his whole life for an opportunity like this.
Peter:
Option 1 is to cooperate, so I guess option 2 is defect. The correct way to defect is to destroy Huygens.
But this is the inverse of how cooperate/defect was framed earlier in the story. When humans had the upper hand, defecting was to blow up the baby eater ship, and proceed to fix the babyeaters in some way. Cooperating was to stay peaceful and manage some mutual compromise. For some reason the humans are pretending that the superhappies haven’t defected.
Paperclippers arrive. The end.
Peter:
Option 1 is to cooperate, so I guess option 2 is defect. The correct way to defect is to destroy Huygens.
That’s not how it works: the prisoner’s dilemma is just an example of what you may choose based on timeless decision theory. There are numerous options, and “true cooperation” is just a stand-in for the optimal decision that takes into account the effect of your decision procedure on the outcome. In the described situation, lying to the Superhappies through the confused Akon and then blowing up Huygens is the best analogue to cooperation in the true prisoner’s dilemma. You only need to give something up if that’s the best way to get what you want. Otherwise it’s not about decision-making, but about specific utility, for example valuing fairness.
Of course, if both sides were more rational, it’d be better to use the fact that you can blow up Huygens confessor-style not in secret (with the risk of being destroyed when the Superhappies learn about this option), but as leverage to negotiate terms that are even better than the story’s True ending (for us, and also for the Superhappies, which is why they should have a protocol in place that would prevent them from destroying the Impossible in this case).
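The cooperate/defect framing in these comments can be sketched as a toy payoff comparison. Everything below is hypothetical: the option names and utility numbers are invented for illustration, not taken from the story.

```python
# Toy payoff table for the decision discussed above. The options and the
# utility numbers are made up for illustration only.
options = {
    "comply_with_superhappies": 2.0,   # humanity modified, but no one dies
    "blow_up_huygens_secretly": 5.0,   # cut the starline, preserve human values
    "negotiate_with_leverage": 8.0,    # reveal the option and bargain openly
}

def best_option(payoffs):
    """Pick the option with the highest utility, a stand-in for the
    'optimal decision' the comment describes."""
    return max(payoffs, key=payoffs.get)

print(best_option(options))  # -> negotiate_with_leverage
```

Under these made-up numbers, `best_option` returns `negotiate_with_leverage`, mirroring the comment’s claim that open leverage beats both compliance and a secret strike; change the numbers and the “correct” choice changes with them, which is the whole dispute.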
For gosh sakes, Faré, it’s only a story.
Sure it’s a story, but one with an implicit idea of human terminal values and such.
I’m actually inclined to agree with Faré that they should count the desire to avoid a few relatively minor modifications over the eternal holocaust and suffering of baby-eater children.
I originally thought Eliezer was a utilitarian, but changed my mind due to his morality series.
(Though I still thought he was defending something that was fairly similar to utilitarianism. But he wasn’t taking additivity as a given but attempting to derive it from human terminal values themselves—so if human terminal values don’t say that we should apply equal additivity to baby-eater children, and I think they don’t, then Eliezer’s morality, I would have thought, would not apply additivity to them.)
This story however seems to show suspiciously utilitarian-like characteristics in his moral thinking. Or maybe he just has a different idea of human terminal values.
I wrote up my main criticism here: “Denying moral agency to justify Cosmic Sacrifice”
I wish there were a fourth ship, from a civilization that assigns huge negative utility to irreversible decisions/actions.
God damn, this is annoying. Why is “let the babyeating aliens alone to have their reprehensible cultural practices the way they want them, and expect them to do the same to us” not an option? Obviously their ideas of what constitutes moral behavior differ from our own, but that’s true of any culture.
One truth that is universal, relevant, and not taken into account, is that demanding everyone converge on the same utility function is suboptimal, probably even when evaluated from that utility function. To restate in terms of culture: If you convinced everybody around the world to wear blue jeans, drink Coke, and listen to rock and roll, I guarantee that 60 years in the future, your grandkids would get less enjoyment out of their world, even if they still love jeans, Coke, and rock and roll. Diversity is needed to generate memes very much like it is needed to generate genes.
Does not follow. We value diversity, but not diversity of utility functions. This allows diversity because some terms in the utility function are subjective (i.e. they depend on the agent’s brain). If I have “people should drink tasty beverages” as a terminal value, I have “MixedNuts should drink mango juice” and “Helen should drink tea”, and if a cosmic ray alters Helen’s brain, the latter will change. Helen.tasty is not an approximation of a mysterious Platonic essence of absolute tastiness, unlike Helen.prove_theorem.
(Maybe we also want some diversity in terminal values, but it’ll probably be very small—we aren’t going to sacrifice the galaxy to cheesecake makers.)
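The subjective-term idea in the comment above can be sketched in a few lines: one shared terminal value (“people should drink tasty beverages”) is instantiated per agent through that agent’s own taste table, so altering an agent’s brain changes the instantiation without touching the shared value. The names and beverages come from the comment; the `tastes` table itself is invented for illustration.

```python
# One shared terminal value, instantiated per agent via each agent's own
# (subjective) tastes rather than a Platonic essence of tastiness.
tastes = {"MixedNuts": "mango juice", "Helen": "tea"}

def preferred_beverage(agent):
    """Instantiate 'people should drink tasty beverages' for one agent."""
    return tastes[agent]

# A cosmic ray altering Helen's brain changes tastes["Helen"], and with it
# the instantiated value, while the shared terminal value stays the same.
tastes["Helen"] = "coffee"
print(preferred_beverage("Helen"))  # -> coffee
```

The design point is that the diversity lives in the per-agent data, not in the shared function, which is the comment’s distinction between diverse tastes and diverse utility functions.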
The narrative suggests that empathy is bad, in the long run; and only truly selfish races are capable of getting along with each other.
The superhappies don’t realize that we are not complete “symmetrists”. There is no need for them to acquire a sense of humor or anything like that. They don’t need to change themselves at all, from the perspective of human morality.
Why don’t the humans point this out? They might be able to get a better deal this way... maybe humans could get rid of only bodily pain and the most severe anxiety and depression, along with receiving untranslatable 2, while the Superhappies remain unmodified. That seems like a good deal for both humans and Superhappies... I personally would love to get some of that untranslatable 2 and do not see the value in extreme negative states, and while I’d hate to lose bodily pain entirely, it is certainly not central to what makes us human.
It also works out between superhappies and baby-eaters, since the children will not feel extreme anxiety upon being eaten, and yet a sacrifice of a sentient being is still made. Superhappies don’t seem to care about painless destruction of sentient beings...they are hedonists, pleasure/pain utilitarians.
Humans and babyeaters will still have a problem with each other, of course, because of the destruction of sentience involved. We’ll have to work that one out separately.
I haven’t read all comments here, so this might have already been said. First things first. The SH aliens can go take a hike. Once you go into space you either abandon your morality almost altogether, or keep it, and if you keep it, you probably aren’t going to be very happy if someone tries to change it. So if you still haven’t abandoned it, you might just as well do everything in your power to stop anyone from changing it. Now we need to figure out a way to trick the SH into accepting our morality and abandoning theirs. What do we know about them? We know that they:
are technologically 200+ years ahead of us
can’t lie
can’t perceive lies that well
want to have pleasure all the time
apparently can’t tolerate pain being anywhere in the universe
are very willing to compromise
Since they are technologically more advanced than we are, we can’t fight them openly. Open confrontation will end with us dead (or converted). What we can do, however, is wage information warfare. What I propose is this:
the Confessor threatens to stun (or stuns) the pilot
the humans contact the SH ship. Hopefully it still hasn’t moved out of the system.
we ask the SH for an approximate number of them in the universe, to better help us judge whether their proposed solution is the best one (obviously the solution should be weighted in favor of the civilisation with more people)
they can’t lie, so they tell us
HOPEFULLY we still haven’t told them how many humans there are. If we haven’t, we lie and say that there are 10-100 times as many humans as SH. If we have (by telling the baby-eaters), we say that that information was incorrect, and make up some reasons for sending the BE wrong info.
And then we bluff, and say that if the Impossible doesn’t return, or if the SH won’t convert to our morality, humanity will kill every human in the most horrible and painful way imaginable.
since the SH aren’t very good at detecting lies (and we will make sure that the pilot (if he wasn’t stunned), or someone else (like the Sensory) who would mostly believe this bluff, does the speaking), they should believe us; and since they have zero leverage over us (threatening to blow up our ship wouldn’t work, for example) and they do not seem to hesitate to compromise, they should convert
rinse and repeat for the whole SH civilisation
I know I’m way late to the party, but I still gave it a good long think, considering all aspects of the problem before trying to envision a solution. What seems the best solution to me is to return along their own starline, use the several weeks of delay to evacuate as many people as possible from the connecting colony Huygens (which, from the intro, is the colony the ship came through via the starline), and then destroy that system, cutting humanity off from the Super Happy Fun aliens, at least for now.
Okay, I’m sorry, I know I am a horrible human being, a product of its time, who can’t comprehend the societal macroevolution of his own species, but DAMN IT, these people are way, way, way too ethical for anything human-like to be seen in them. They literally stopped celebrating the effective salvation of their entire species for the youth of a species that produces little to no stimuli liable to evoke sympathy in evolutionary terms. And I know I’m shallow, and I know we’d all love to think we’re more decent as a whole than to judge our intergalactic neighbors by looks alone, but if the aliens in question looked like insectoid crystals, I’m sorry, we would care little more about them than we care about black widow spiders eating their own husbands or, hell, about what meerkats do to their young when times get tough. Sure, no conscience; that is an exceptional argument that in a perfect world would end the debate there… on paper. In reality we cannot help but be extremely affected by such radical differences in outward appearance… sigh, thank Eliezer for the Confessor. I’m identifying so much right now.
No.
It starts out very good, funny, and interesting, but then you make one of the most common mistakes of creative writing: forgetting that alternatives and communication exist. I like your writing style and most of the ideas you have, but overall that is really not enough.
You want to show how humans are allegedly similar to the baby-eating aliens in forcing the standards of adults onto children, or the standards of an advanced society onto a less advanced one, but you fail to do so because these things are obviously too different.
But mainly, the problem is that the humans do not even try to argue their point to the advanced aliens who evidently completely misunderstand humans.
You try to equate eating babies (making them suffer for a prolonged time, against the children’s will, and ultimately causing death) with simple discomfort that is largely voluntary, because human society cannot function with the kind of life the advanced aliens expect, and because not adhering to those standards is only met with rebukes, not violence and not death. It is an obvious point, but to force a plot development your characters do not even think of such obvious things, and they also imply that the advanced aliens are hellbent on forcing their standards on humans when the aliens are still at the stage of merely discussing a change of society and you show them to be still very much open to dialogue with humans.
PS: showing legalized rape in a positive light without proper explanation is jarring and absurd and does nothing to improve the story; it simply utterly destroys suspension of disbelief while making you really question not just the characters in your story or the human society they live in, but also the writer himself. It does not work as a simple example of how society changes over time to accept things unacceptable before, especially not when you bring up an argument of far less painful and negative things being unacceptable to the aliens.
Seriously, lay off the rape hentai. Consensual hentai is far better in every aspect.
That aside, besides the rape thing (completely unnecessary; if you just want to show how society evolves to broaden its morals over time, there are more than enough other things to use), the main issue is that you force the confrontation only because the characters cannot think of arguments that should be more than obvious to them, considering their supposed skills and experience and knowledge.
That is something that can easily break a story in two and I just had to stop reading because it is an obvious sign that the story is not thought through well enough and what follows can never again suspend disbelief because the story is now completely compromised. Once there has been such huge and central evidence of the writer forcing an outcome without even hinting at the alternatives being explored, especially when the alternatives are obvious, the rest of the story can never again be separated from the idea of the writer making things happen for his own convenience instead of for the story to make sense. From then on it becomes impossible to let even minor issues with the storytelling slide because disbelief can no longer be suspended.
This is especially bad in your case because you are trying to convey morals — whether it’s what you believe or you just think it makes for a nice thought experiment is irrelevant — which just fall apart under any kind of scrutiny.
I had to stop reading because now that the story stopped making sense it just comes across as some kind of inconsequential and poorly thought through tirade about shifting morals in human society.
PS: all the issues aside, that is a pretty interesting setting.
When there are difficult decisions to be made, I like to come back to this story.