True Ending: Sacrificial Fire (7/8)
(Part 7 of 8 in “Three Worlds Collide”)
Standing behind his target, unnoticed, the Ship’s Confessor had produced from his sleeve the tiny stunner—the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, his arm swept outward -
- and anesthetized the Lord Akon.
Akon crumpled almost instantly, as though most of his strings had already been cut, and only a few last strands had been holding his limbs in place.
Fear, shock, dismay, sheer outright surprise: that was the Command Conference staring aghast at the Confessor.
From the hood came words absolutely forbidden to originate from that shadow: the voice of command. “Lord Pilot, take us through the starline back to the Huygens system. Get us moving now, you are on the critical path. Lady Sensory, I need you to enforce an absolute lockdown on all of this ship’s communication systems except for a single channel under your direct control. Master of Fandom, get me proxies on the assets of every being on this ship. We are going to need capital.”
For a moment, the Command Conference was frozen, voiceless and motionless, as everyone waited for someone else to do something.
And then -
“Moving the Impossible now, my lord,” said the Lord Pilot. His face was sane once again. “What’s your plan?”
“He is not your lord!” cried the Master of Fandom. Then his voice dropped. “Excuse me. Confessor—it did not appear to me that our Lord Administrator was insane. And you, of all people, cannot just seize power—”
“True,” said the one, “Akon was sane. But he was also an honest man who would keep his word once he gave it, and that I could not allow. As for me—I have betrayed my calling three times over, and am no longer a Confessor.” With that, the once-Confessor swept back the hood -
At any other time, the words and the move and the revealed face would have provoked shock to the point of fainting. On this day, with the whole human species at stake, it seemed merely interesting. Chaos had already run loose, madness was already unleashed into the world, and a little more seemed of little consequence.
“Ancestor,” said the Master, “you are twice prohibited from exercising any power here.”
The former Confessor smiled dryly. “Rules like that only exist within our own minds, you know. Besides,” he added, “I am not steering the future of humanity in any real sense, just stepping in front of a bullet. That is not even advice, let alone an order. And it is… appropriate… that I, and not any of you, be the one who orders this thing done—”
“Fuck that up the ass with a hedge trimmer,” said the Lord Pilot. “Are we going to save the human species or not?”
There was a pause while the others figured out the correct answer.
Then the Master sighed, and inclined his head in assent to the once-Confessor. “I shall follow your orders… kiritsugu.”
Even the Kiritsugu flinched at that, but there was work to be done, and not much time in which to do it.
In the Huygens system, the Impossible Possible World was observed to return from its much-heralded expedition, appearing on the starline that had shown the unprecedented anomaly. Instantly, without a clock tick’s delay, the Impossible broadcast a market order.
That was already a dozen ways illegal. If the Impossible had made a scientific discovery, it should have broadcast the experimental results openly before attempting to trade on them. Otherwise the result was not profit but chaos, as traders throughout the market refused to deal with you; just conditioning on the fact that you wanted to sell to or buy from them was reason enough for them not to. The whole market seized up as hedgers tried to guess what the hidden experimental results could have been, and which of their counterparties had private information.
The Impossible ignored the rules. It broadcast the specification of a new prediction contract, signed with EMERGENCY OVERRIDE and IMMINENT HARM and CONFESSOR FLAG—signatures that carried extreme penalties, up to total confiscation, for misuse; but any one of which ensured that the contract would appear on the prediction markets at almost the speed of the raw signal.
The Impossible placed an initial order on the contract backed by nearly the entire asset base of its crew.
The prediction’s plaintext read:
In three hours and forty-one minutes, the starline between Huygens and Earth will become impassable.
Within thirty minutes after, every human being remaining in this solar system will die.
All passage through this solar system will be permanently denied to humans thereafter.
(The following plaintext is not intended to describe the contract’s terms, but justifies why a probability estimate on the underlying proposition is of great social utility:
ALIENS. ANYONE WITH A STARSHIP, FILL IT WITH CHILDREN AND GO! GET OUT OF HUYGENS, NOW!)
In the Huygens system, there was almost enough time to draw a single breath.
And then the markets went mad, as every single trader tried to calculate the odds, and every married trader abandoned their positions and tried to get their children to a starport.
“Six,” murmured the Master of Fandom, “seven, eight, nine, ten, eleven—”
A holo appeared within the Command Conference, a signal from the President of the Huygens Central Clearinghouse, requesting (or perhaps “demanding” would have been a better word) an interview with the Lord Administrator of the Impossible Possible World.
“Put it through,” said the Lord Pilot, now sitting in Akon’s chair as the figurehead anointed by the Kiritsugu.
“Aliens?” the President demanded, and then her eye caught the Pilot’s uniform. “You’re not an Administrator—”
“Our Lord Administrator is under sedation,” said the Kiritsugu beside him; he was wearing his Confessor’s hood again, to save on explanations. “He placed himself under more stress than any of us—”
The President made an abrupt cutting gesture. “Explain this—contract. And if this is a market manipulation scheme, I’ll see you all tickled until the last sun grows cold!”
“We followed the starline that showed the anomalous behavior,” the Lord Pilot said, “and found that a nova had just occurred in the originating system. In other words, my Lady President, it was a direct effect of the nova and thus occurred on all starlines leading out of that system. We’ve never found aliens before now—but that’s reflective of the probability of any single system we explore having been colonized. There might even be a starline leading out of this system that leads to an alien domain—but we have no way of knowing which one, and opening a new starline is expensive. The nova acted as a common rendezvous signal, my Lady President. It reflects the probability, not that we and the aliens encounter each other by direct exploration, but the probability that we have at least one neighboring world in common.”
The President was pale. “And the aliens are hostile.”
The Lord Pilot involuntarily looked to the Kiritsugu.
“Our values are incompatible,” said the Kiritsugu.
“Yes, that’s one way of putting it,” said the Lord Pilot. “And unfortunately, my Lady President, their technology is considerably in advance of ours.”
“Lord… Pilot,” the President said, “are you certain that the aliens intend to wipe out the human species?”
The Lord Pilot gave a very thin, very flat smile. “Incompatible values, my Lady President. They’re quite skilled with biotechnology. Let’s leave it at that.”
Sweat was running down the President’s forehead. “And why did they let you go, then?”
“We arranged for them to be told a plausible lie,” the Lord Pilot said simply. “One of the reasons they’re more advanced than us is that they’re not very good at deception.”
“None of this,” the President said, and now her voice was trembling, “none of this explains why the starline between Huygens and Earth will become impassable. Surely, if what you say is true, the aliens will pour through our world, and into Earth, and into the human starline network. Why do you think that this one starline will luckily shut down?”
The Lord Pilot drew a breath. It was good form to tell the exact truth when you had something to hide. “My Lady President, we encountered two alien species at the nova. The first species exchanged scientific information with us. It is the second species that we are running from. But, from the first species, we learned a fact which this ship can use to shut down the Earth starline. For obvious reasons, my Lady President, we do not intend to share this fact publicly. That portion of our final report will be encrypted to the Chair of the Interstellar Association for the Advancement of Science, and to no other key.”
The President started laughing. It was wild, hysterical laughter that caused the Kiritsugu’s hood to turn toward her. From the corner of the screen, a gloved hand entered the view; the hand of the President’s own Confessor. “My lady...” came a soft female voice.
“Oh, very good,” the President said. “Oh, marvelous. So it’s your ship that’s going to be responsible for this catastrophe. You admit that, eh? I’m amazed. You probably managed to avoid telling a single direct lie. You plan to blow up our star and kill fifteen billion people, and you’re trying to stick to the literal truth.”
The Lord Pilot slowly nodded. “When we compared the first aliens’ scientific database to our own—”
“No, don’t tell me. I was told it could be done by a single ship, but I’m not supposed to know how. Astounding that an alien species could be so peaceful they don’t even consider that a secret. I think I would like to meet these aliens. They sound much nicer than the other ones—why are you laughing?”
“My Lady President,” the Lord Pilot said, getting a grip on himself, “forgive me, we’ve been through a lot. Excuse me for asking, but are you evacuating the planet or what?”
The President’s gaze suddenly seemed sharp and piercing like the fire of stars. “It was set in motion instantly, of course. No comparable harm done, if you’re wrong. But three hours and forty-one minutes is not enough time to evacuate ten percent of this planet’s children.” The President’s eyes darted at something out of sight. “With eight hours, we could call in ships from the Earth nexus and evacuate the whole planet.”
“My lady,” a soft voice came from behind the President, “it is the whole human species at stake. Not just the entire starline network beyond Earth, but the entire future of humanity. Any incrementally higher probability of the aliens arriving within that time—”
The President stood in a single fluid motion that overturned her chair, moving so fast that the viewpoint bobbed as it tried to focus on her and the shadow-hooded figure standing beside. “Are you telling me,” she said, and her voice rose to a scream, “to shut up and multiply?”
“Yes.”
The President turned back to the camera angle, and said simply, “No. You don’t know the aliens are following that close behind you—do you? We don’t even know if you can shut down the starline! No matter what your theory predicts, it’s never been tested—right? What if you create a flare bright enough to roast our planet, but not explode the whole sun? Billions would die, for nothing! So if you do not promise me a minimum of—let’s call it nine hours to finish evacuating this planet—then I will order your ship destroyed before it can act.”
No one from the Impossible spoke.
The President’s fist slammed her desk. “Do you understand me? Answer! Or in the name of Huygens, I will destroy your ship—”
The President’s own Confessor caught her body, very gently supporting it as it collapsed.
Even the Lord Pilot was pale and silent. But that, at least, had been within law and tradition; no one could have called that thinking sane.
On the display, the Confessor bowed her hood. “I will inform the markets that the Lady President was driven unstable by your news,” she said quietly, “and recommend to the government that they carry out the evacuation without asking further questions of your ship. Is there anything else you wish me to tell them?” Her hood turned slightly, toward the Kiritsugu. “Or tell me?”
There was a strange, quick pause, as the shadows from within the two hoods stared at each other.
Then: “No,” replied the Kiritsugu. “I think it has all been said.”
The Confessor’s hood nodded. “Goodbye.”
“There it goes,” the Ship’s Engineer said. “We have a complete, stable positive feedback loop.”
On screen was the majesty that was the star Huygens, of the inhabited planet Huygens IV. Overlaid in false color was the recirculating loop of Alderson forces which the Impossible had steadily fed.
Fusion was now increasing in the star, as the Alderson forces encouraged nuclear barriers to break down; and the more fusions occurred, the more Alderson force was generated. Round and round it went. All the work of the Impossible, the full frantic output of their stardrive, had only served to subtly steer the vast forces being generated; nudge a fraction into a circle rather than a line. But now -
Did the star brighten? It was only their imagination, they knew. Photons take centuries to exit a sun, under normal circumstances. The star’s core was trying to expand, but it was expanding too slowly—all too slowly—to outrun the positive feedback that had begun.
“Multiplication factor one point oh five,” the Engineer said. “It’s climbing faster now, and the loop seems to be intact. I think we can conclude that this operation is going to be… successful. One point two.”
“Starline instability detected,” the Lady Sensory said.
Ships were still disappearing in frantic waves on the starline toward Earth. Still connected to the Huygens civilization, up to the last moment, by tiny threads of Alderson force.
“Um, if anyone has anything they want to add to our final report,” the Ship’s Engineer said, “they’ve got around ten seconds.”
“Tell the human species from me—” the Lord Pilot said.
“Five seconds.”
The Lord Pilot shouted, fist held high and triumphant: “To live, and occasionally be unhappy!”
This concludes the full and final report of the Impossible Possible World.
X = Ships unable to escape Huygens
Y = Ships in Babyeater Fleet
Z = Planets Babyeaters Have
Over the last parts the pace is too fast; it feels rushed. This leads to a loss in quality of the fiction, imo. Besides, it glosses over some holes in the story, such as: why would Akon keep his word under these circumstances? Why would the Happies not foresee the detonation of Huygens? Why three hours to evacuate...?
Cannibal, what exactly is your point and aren’t you forgetting all the Babyeater casualties we’d expect in the next week?
The point is that the Normal Ending is the most probable one.
If blowing up Huygens could be effective, why did it even occur to you to blow up Earth before you thought of this?
Hmm. I think I’d rather have agreed to the Superhappies’ deal.
One reason is that with their rate of expansion—which they might be motivated to increase now, too—they’ll probably surprisingly soon find an alternative starline route to humans anyway. (Though even if this was guaranteed not to happen, I probably still would rather have agreed to the deal.)
Also, I think I would prefer blowing up the nova instead. The babyeaters’ children’s suffering is unfortunate, no doubt, but hey, I spend money on ice cream instead of saving starving children in Africa. The superhappies’ degrading of their own, more important, civilization is another consideration.
(you may correctly protest about the ineffectiveness of aid—but would you really avoid ice cream to spend on aid, if it were effective and somehow they weren’t saved already?)
Shutting up and multiplying suggests that we should neglect all effects except those on the exponentially more powerful species.
Cannibal: Heh.
Spuckblase: You know, you’re right. I revised/returned some paragraphs that were deleted earlier, starting after ”...called that thinking sane.”
Simon: It just didn’t happen to cross my mind; as soon as I actually generated the option to be evaluated, I realized its superiority.
Steven: I thought of that, but decided not to write up the resulting conversation about Babyeater populations versus Babyeater expansion rates versus humans etcetera, mostly because we then get into the issue of “What if we make a firm commitment to expand even faster?” The Superhappies can expand very quickly in principle, but it’s not clear that they’re doing so—human society could also choose a much higher exponential on its population growth, with automated nannies and creches.
Aleksei: Part of the background here (established in the opening paragraphs of chapter 1) is that the starline network is connected in some way totally unrelated to space as we know it—no star ever found is within reach of Earth’s telescopes (or the telescopes of other colonies). Once the Huygens starline is destroyed, it’s exceedingly unlikely that any human will find the Babyeaters or the Superhappies or any star within reach of any of their stars’ telescopes, ever again. Of course, this says nothing of other alien species—if anyone ever again dared to follow a nova line.
Vassar pointed out that there’s a problem if you have exponential expansion and any significant starting density; I’d thought of this, but decided to let it be their Fermi Paradox—maybe any sufficiently advanced civilization discovers that it can traverse something other than starlines to go Somewhere Else where intelligence is a set of measure zero, and actually traversing starlines is dangerous because of who might learn about your existence and e.g. threaten to blackmail you.
The Superhappies are not very familiar with deception, as opposed to withholding information, and their ability to model other minds they can’t have sex with seems to be weak—no matter what their thinking speed. This was shown in chapter 5, when the Superhappies—who don’t lie, remember—concluded their conversation with “We hope you do not mind waiting.”
The Superhappies don’t seem to have the same attachment to their current personalities, or the same attachment to individual free will, as a human does; so far as they can understand it, they’re just trading utilons with us and offering us a good deal on the transaction. Akon himself believed what he was telling them, which defeats many potential methods of lie detection.
In general, the Superhappies seem to lack numerous human complications such as status quo bias (keeping your current self intact), or preferences for particular rituals of decision (such as individual choice). The resulting gap between their decision processes and ours is not lightly crossed by their mostly sexual empathy, and it’s not as if they can simulate us on the neural level from scratch.
Several commenters earlier asked whether the Superhappies “defected” by firing on the Babyeater ship. From the Superhappy standpoint, they had already offered the Babyeater ship the categorically cooperative option of utility function compromise; in refusing that bargain, the Babyeaters had already defected.
Three hours and forty-one minutes simply happens to be how long it takes to blow up a Huygens-sized star.
The assumption of the True Ending is that the Superhappies were (a) not sure if destroying the apparently cooperative Impossible would encourage Huygens to blow itself up if the Impossible failed to return; and (b) did not have any forces in range to secure Huygens in time, bearing in mind that the Babyeaters were a higher priority. The Normal Ending might have played out differently.
The Superhappies can expand very quickly in principle, but it’s not clear that they’re doing so
We (or “they” rather; I can’t identify with your fanatically masochist humans) should have made that part of the deal, then. Also, exponential growth quickly swamps any reasonable probability penalty.
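(To make that concrete with purely illustrative numbers of my own: suppose a commitment to faster expansion only actually pays off with probability $p = 0.01$, but buys a growth rate of 10%/year instead of 5%/year. The expected size of the faster branch overtakes the certain slower one once
$$0.01\,e^{0.10\,t} > e^{0.05\,t} \;\iff\; 0.05\,t > \ln 100 \approx 4.6 \;\iff\; t \gtrsim 92 \text{ years},$$
which is nothing on the timescales these civilizations plan over. The specific rates and the probability are assumptions for the sake of the example, not numbers from the story.)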
I’m probably missing something but like others I don’t get why the SHs implemented part of BE morality if negotiations failed.
Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.
Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a “shut up and multiply” dilemma; I’m inclined to think both horns of the dilemma are sharper if you change it to eight days.
But overall a good ending to a good story, and a rare case where a plot is wrapped up by the characters showing the spark of intelligence. Nicely done!
So what is next? 7⁄8 implies a next part, yet it also seems to be finished.
Steven: They’re being nice. That’s sort of the whole premise of the Superhappies—they’re as nice as an alien species can possibly get and still be utterly alien and irrevocably opposed to human values. So nice, in fact, that some of my readers find themselves agreeing with their arguments. I do wonder how that’s going to turn out in real life.
Russell: It’s stated that most colony worlds are one step away from Earth (to minimize the total size of the human network). This means there’s going to be a hell of a lot of ships passing through Earth space (fortunately, space tends to be pretty large).
If you can get anyone at all from Huygens to the starline in 3.6 hours, then from the starline to Huygens and back is at most 7.2 hours. We assume that transit through Earth space is so dense that there are already many ships in close proximity to the Huygens starline. These speeds imply some kind of drive that doesn’t use inertial reaction mass, so it’s also safe to assume that many ships can enter atmosphere.
If they could call in ships from Earth, they could blanket the planet. So, yes, eight hours to evacuate. Eight days would make it practically certain the Superhappies would show up.
Well, it’s not like it’s hard to see reason in Superhappies’ values.
1) I, personally, don’t have a terminal value of non-cannibalism. The actual reason I don’t eat babies now is a result of multiple other values:
I value human life, so I consider killing a human to get some food a huge utility loss.
Any diseases the meat’s initial owner had contracted are almost 100% transferable to me. Any poisons that accumulated in the initial owner’s body will also accumulate in mine. Also, humans eat a lot of junk food. Eating humans is bad for one’s health.
So, I don’t have any problem with eating safe-to-eat human meat that is not produced by killing conscious human beings. I would actually be curious to taste, for example, vat-grown clone meat grown from a sample of my own cells. This position may not be held by the average human, but I don’t think it’s particularly disturbing from a transhumanist point of view.
2) I consider humans’ desire to keep their identity and humanity to be, at least in part, status-quo bias. Also, humans don’t really stay themselves for long. For example, a 5-year-old human is quite different from that same human at 10, 15, 20, 25, etc. Change is gradual, but it’s real and quite big. (Sorry, I have no idea how to measure this quantitatively, but, for example, my 5-year-old self is an entirely different person from my current self.) That said, I personally don’t value the status quo all that much.
3) Now, if we were to somehow describe the set of humanity’s core values, we could try to reason against the Superhappies. But I fail to see the ability to feel pain as a necessary part of this set.
All that considered, I don’t see the Superhappies’ proposal as horrifying. At least, I don’t think the decision to kill 15 billion people to delay the Superhappies’ modification of humanity is better than the decision to enter the species melting pot, losing (some of) humanity and getting neat bonuses in the form of interstellar peace and a better survival rate for the combined three-component species.
Russell Wallace,
A good fable is something different; the most probable outcome (especially here) is another story. The undiscussed advantages the Superhappies have accumulated so far, and are accumulating at this very moment, are crucial.
Isn’t this a win-win? The babyeaters get saved too, by the superhappies, who were not cut off from the babyeater starline. The only losers are the superhappies, who can’t “save” the humans.
Julian,
And possibly billions of Huygens humans. Don’t forget those.
It seems like an easy solution would be to just inform the superhappies a little more about oral sex (how humans “eat” their young). They could make a few tweaks, and we’d lose the least (some guys might consider that an improvement).
Anyone else think the Superhappies sounded a whole lot like the Borg?
Something like this possibility occurred to me, but I don’t think this actually is better.
At least, I think I’d have to be walked through the reasoning, since right now I THINK I’d prefer Last Tears to Sacrificial Fire, conditioned on, well, the conditions I list in this comment holding.
ie, giving up pain/suffering in a non wireheading way, and being altered to want to eat non-and-never-have-been-conscious “pre humans” really doesn’t seem all that bad to me, compared to the combined costs of defecting in single iteration PD (again, same ole metarationality arguments hold, especially when we imagine future different species we may encounter) + slaughter of everyone unable to escape from Huygens in time + (and I’m thinking this part is really important) missing out on all the wonderful complexities of interaction with the updated versions of the Babyeaters and the Superhappies.
Heck, giving up pain/suffering in a non-wireheading or otherwise-screw-us-up way may actually be a right and proper thing. I’m not entirely sure on this, though. As I’ve said elsewhere, the main reason I’d want to keep it is simply a vague notion of not wanting to give up, well, any capability I have currently + wanting to be able to retain the ability to at least decode/comprehend my old memories.
I mean, I guess I get the idea of “our values are, well, our values, so according to them, maintaining those values is right”, but… it sure seems to me that those very same values are telling me “Last Tears” has a higher preference ranking than this does.
Eliezer: How can the superhappies consider their offer fair if they made it up and accept it, and the babyeaters reject it? Why do they think that their payment to the babyeaters is in any way adequate?
It seems to me that they would have to at least ramp up the payment to/costs for the babyeaters, until there was an offer the babyeaters would accept, even if the superhappies would reject it. Then there are points to negotiate from.
But just to make an offer that you predict the other side will reject, and then blow them up? The babyeaters were nicer.
I agree with the President of Huygens; the Babyeaters seem much nicer than the Lotuseaters. Maybe that’s just because they don’t physically have the ability to impose their values on us, though.
Strange this siding with Babyeaters here … strange.
I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come up with a nicer name for the babyeaters, and they (hopefully) learn to live with the fact that we’re a valuable ally that prefers not to eat babies but could probably be persuaded given time.
P.S. anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?
Personally, I side with the Hamburgereaters. It’s just that the Babyeaters are at the very least sympathetic; I can see viewing them as people. As they’ve said, the Babyeaters even make art!
The remarkable thing about this story is the conflicting responses to it. The fact that a relatively homogeneous group of humans can have totally different intuitions about which ending is better and which aliens they prefer, to me, means that actual aliens (or an AI, whatever) have the potential to be, well, alien, far in excess of what is described in this story. Both aliens have value systems which, while different from ours, are almost entirely comprehensible. I think we might be vastly underestimating how radically alien aliens could be.
If the alien value systems weren’t comprehensible, how could we explain them in a story? Even if we didn’t comprehend them, we could probably still figure out if they deceive. If they don’t, we just figure out their demands and decide if they’re acceptable. If the demands aren’t, we either try to wipe them out or flee. If they do deceive, we can either guess what their final plan is, or wipe them out or flee. We wouldn’t fully understand their values, and we don’t fully understand other humans’ values. When I see a moral dilemma, I realize I don’t fully understand my own values. The only way to understand another being’s values would be to share thoughts, and since we could never know if the thoughts were being shared accurately, we couldn’t be sure what others really value.
How can incomprehensible value systems be represented in story form? With abortive attempts at those who hold them trying to explain them. Like a garuda trying to explain how “theft of choice (of when and with whom to have sex)” is a different crime than “rape” to a human (who doesn’t value individual choice in the same way). Or like a superhappy who just knows that we’d absolutely love to be able to Untranslatable 4.
Only for those stupid robots who can’t read a few funny written letters. Babyeater-level robots can’t talk here.
And now there will be many cults trying as hard as they can to make contact with the superhappies.
I say this because I witnessed many people discussing Brave New World as an actual utopia… Humans can have incompatible values too.
A late response, but for what it’s worth, it could be said that part of the point of the climax and “true” conclusion of this story was to demonstrate how rational actors, using human logic, can be given the same information and yet come up with diametrically opposing solutions.
Eliezer, I hope you’ll consider expanding this story into a novel. I’d buy it.
I wonder, do the people preferring the Babyeaters over the Superhappies, remember that as a necessary consequence of Babyeater values nearly half* of their species is, at any given time, dying in severe pain?
*From part 2, ~10 children eaten/year/adult, ~1 month for digestion to complete.
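(Rough arithmetic behind that estimate, using the part-2 figures: each adult accounts for about $10 \times \tfrac{1}{12} \approx 0.8$ children in digestion at any given moment, so counting only adults and currently-digesting children the dying fraction is roughly
$$\frac{0.8}{1 + 0.8} \approx 0.45,$$
i.e. “nearly half”. The counting convention—ignoring babies not yet selected for eating—is an assumption of this back-of-the-envelope, not something the story specifies.)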
Psy-Kosh: I don’t see the final situation as a prisoner’s dilemma—by destroying Huygens, humanity shows a preference for mutual “defection” over mutual “cooperation”.
simon: err… descriptive, normative… ? Maybe you genuinely value ice cream over saving lives, but your behavior isn’t a justificatory argument for this, or, given akrasia, even strong evidence.
Nick,
Behavior isn’t an argument (except when it is), but it is evidence. And it’s akrasia when you say, “Man, I really think spending this money on saving lives is the right thing to do, but I just can’t stop buying ice cream”—not when you say “buying ice cream is the right thing to do”. Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it’s pretty clear from context that Simon believes he values ice cream more.
And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?
It’s evidence of my values which are evidence of typical human values. Also, I invite other people to really think if they are so different.
Eliezer tries to derive his morality from human values, rather than simply assuming that it is an objective morality, or asserting it as an arbitrary personal choice. It can therefore be undermined in principle by evidence of actual human values.
Also, I’m not at all confident that compromising with the Superhappies would be very bad, even before considering the probably larger benefit of them becoming more like us. I think I’d complain more about the abruptness and exogenousness of the change than the actual undesirability of the end state. As others have pointed out, though, a policy of compromise would lead to dilution of everyone’s values into oblivion, and so may be highly undesirable.
More generally and importantly, though, I wonder if the use of wireheading as a standard example of “the hidden complexity of wishes” and of FAI philosophical failure (and it is an excellent example) leads me and/or other Singularitarians to have too negative a reaction to, well, anything that sounds like wireheading, including eliminating pain.
If the Super-Happies were going to turn us into orgasmium, I could see blowing up Huygens. Nor would it necessarily take such an extreme case to convince me to take that extreme measure. But this . . . ?
Sure, I would turn this down if it were simply offered as a gift. But I really, really, cannot see preferring the death of fifteen billion people over it. Although I value the things that the Super-Happies would take away, and I even value valuing them, I don’t value valuing them all that much. Or, if I do, it is very far from intuitively obvious to me. And the more I think about it, the less likely it seems.
I hope that Part 8 somehow makes this ending seem more like the “right” one. Maybe it will be made clear that the Super-Happies couldn’t deliver on their offer without imposing significant hidden downsides. It wouldn’t stretch plausibility too much if such downsides were hidden even from them. They are portrayed as not really getting how we work. As I said in this comment to Part 3, we might expect that they would screw us up in ways that they don’t anticipate.
But unless some argument is made that their offer was much worse than it seemed at first, I can’t help but conclude that the crew made a colossal mistake by destroying Huygens, to understate the matter.
Simon: “Eliezer tries to derive his morality from human values”
I would correct the above to “Eliezer tries to derive his morality from stated human values.”
That’s where many of his errors come from. Everyone is a selfish bastard. But Eliezer cannot bring himself to believe it, and a good fraction of the sorts of people whose opinions get taken seriously can’t bring themselves to admit it.
Tyrrell: Agreed. As I said in what, well, I said, my acceptance of the SuperHappy bargain was conditional in part on, well, the change being engineered in such a way that it doesn’t make the rest of our cognitive structure, values, etc go kablewey. But, given that the changes are as advertised, and there aren’t hidden surprises of the “if I really thought through where this would lead, I’d see this is very very bad” variety, well, sure seems to me that the choice in this ending is the wrong one.
Nick: And do we really want, in general, defection to be the norm? ie, when we next meet up with a different species? ie, by the same ole metarationality arguments (ie, blah blah, it’s not us causing their behavior, but common cause leading to both, our choices arise from algorithms/causality, yada yada yada yada) it would seem that now humanity ought to expect there to be more PD defectors in the universe than previously thought. I think...
This would be a bad thing.
I’m not sure, but was this line:
But, from the first species, we learned a fact which this ship can use to shut down the Earth starline
supposed to read “the Huygens starline”?
Sure, I would turn this down if it were simply offered as a gift. But I really, really, cannot see preferring the death of fifteen billion people over it.
How many humans are there not on Huygens?
Psy-Kosh: Yeah, I meant to have a “as Psy-Kosh has pointed out” line in there somewhere, but it got deleted accidentally while editing.
ad:
I’m pretty sure that it wouldn’t matter to me. I generally find on reflection that, with respect to my values, doing bad act A to two people is less than twice as bad as doing A to one person. Moreover, I suspect that, in many cases, the badness of doing A to n people converges to a finite value as n goes to infinity. Thus, it is possible that doing some other act B is worse than doing A to arbitrarily many people. At this time, I believe that this is the case when A = “allow the Super-Happies to re-shape a human” and B = “kill fifteen billion people”.
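(For concreteness, one toy disutility curve with both of those properties—the functional form and the symbols $B_\infty$ and $\lambda > 0$ are purely illustrative assumptions:
$$B_A(n) = B_\infty\bigl(1 - e^{-\lambda n}\bigr), \qquad B_A(2) < 2\,B_A(1), \qquad \lim_{n \to \infty} B_A(n) = B_\infty.$$
Under such a curve, any act $B$ whose badness exceeds $B_\infty$ is worse than doing $A$ to arbitrarily many people.)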
Oh, I’m starting to see why the Superhappies are not so right after all, what they lack, why they are alien, in the Normal Ending and in Eliezer’s comments. I think this should have been explained in more detail in the story, because I initially failed to see their offer as anything but good, let alone bad enough to kill yourself. I want untranslatable 2!
Still, if I had been able to decide on behalf of humanity, I would have tried to make a deal—not outright accepted their offer, but negotiated to keep more of what matters to us, maybe by adopting more of their emotions, or asking lesser modifications of them. It just doesn’t look that irreconcilable.
Also, their offer to have the Babyeaters eat nonsentient children sounds stupid—like replacing our friends and lovers with catgirls.
Julian Morrison: The only losers are the superhappies, who can’t “save” the humans.
You are ignoring the human children.
As the superhappies pointed out, they are in a situation comparable to that of the babyeater children—suffering before having internalized a philosophy that makes it okay, only because the adults want them to. (Which was the whole reason why the superhappies wanted to intervene.)
I think that this is the “right” ending in the sense that I think it’s the kind of thing that typical present-day non-singularitarian humans would do: Be so afraid of being altered that they would consign a large number of their own kind to death rather than face alteration (correct or incorrect, this is the same kind of thinking you see in resistance to life extension and various other H+ initiatives). I’m not confident that it’s what rational humans should do.
Small changes in the story could make me get off the fence in either direction. If the death toll for avoiding the Superhappy-Babyeater-Human Weirdtopia was negligible and Huygens could completely evacuate, then I would support blowing it up. Alternatively, if the Superhappy proposal was stripped of Babyeater values, especially if a slightly better compromise between human and Superhappy values was possible, then I would not support blowing up Huygens.
I think the Superhappy proposal was bad, but as Tyrrell McAllister said, I’m not sure it was so bad as to justify killing 15 billion people. And most of the problem with the Superhappy proposal was actually due to the Babyeater values that the Superhappies wanted to introduce, not the Superhappy values they wanted to introduce. I really can’t see Babyeaters and humans ever compromising, but I can see Superhappies and humans compromising.
I think if humans had run into the Superhappies alone, or had persuaded them not to force Babyeater values on us, then a mutually acceptable deal with the Superhappies could have been worked out (for instance, what if they left the warning component of pain, but made it less painful?). The Superhappies and humans should’ve gotten together, found a compromise or union of our values, then imposed those values on the Babyeaters (whose values are more repugnant to us than the Superhappies’, and more repugnant to the Superhappies than ours).
To again agree with Tyrrell, if the story had been written such that the Superhappies wanted to do something more drastic and dehumanizing than eliminate “bodily pain, embarrassment, and romantic troubles,” such as turn us into orgasmium, then I would see a much bigger problem with cooperating with them. But, they aren’t, and what they are taking away would alter our humanity, but not destroy it. They aren’t trying to remove complex positive experiences, only negative ones; they aren’t trying to remove humor or art. They do want to have sex with humans, but this is merely weird, not catastrophic, and might even be more acceptable to humans in this story due to their, uh, different attitudes towards sex than ours.
Minus the Babyeater values, the Superhappy deal would merely lead to a Weirdtopia that doesn’t sound all that bad as far as Weirdtopias go, unless there’s something I’m missing (and I think many humans would think it was great). The Superhappy-Human Weirdtopia doesn’t seem bad enough to justify killing 15 billion people. Maybe I just have different intuitions.
The only way—at least within the strangely convenient convergence happening in the story—to remove the Babyeater compromise from the bargain is for the humans to outwit the Superhappies such that they convince the Superhappies to be official go-betweens amongst all three species. This eliminates the necessity for humans to adopt even superficial Babyeater behavior, since the two incompatible species could simply interact exclusively through the Superhappies, who would be obligated by their moral nature to keep each side in a state of peace with the other. It should be taken as a given, after all, that the Superhappies will impose the full extent of their proposed compromises on themselves. They’d theoretically be the perfect inter-species ambassadors.
That said—given the Superhappies’ thinking speed, alien comprehension (plus their selfishness and unreasonable impatience, either of which could be a narrative accident) and higher technological advancement—I’m fairly confident that it would be impossible for this story’s humans to outwit them.
Eliezer, thanks. I mostly read OB for the bias posts and don’t enjoy narratives or stories, but this one was excellent.
Tyrrell, we aren’t told how many humans exist. There could be 15 trillion, so the death of one system may not even equal the number of people who would commit suicide if the SHs had their way.
I don’t find the SHs to be “nice” in any sense of the word. In my reading, they aren’t interested in making humans happy. They can’t be—they don’t even understand the human brain. I think they are a biological version of Eliezer’s smiley face maximizers. They are offended by mankind’s expression of pain (it’s a negative externality to them) and want to remove what offends them. I don’t think any interstellar civilization would be very successful if it did not learn to ignore or deal with non-physical negative externalities from other races (which would, unfortunately, include baby-eating).
The SH did not even seem to consider the most obvious option (to me, at least) which is to trade and exchange cultures the normal way. Many humans would undoubtedly be drawn in to the SH way of life. I suppose their advanced technology makes the cost of using force relatively low, so this option seemed unacceptable. Still, I wonder why Akon didn’t propose it (or did he)?
I’ve enjoyed the story very much so far, Mr. Yudkowsky.
Incidentally, and fairly off-topic, there’s a “hard” sci-fi roleplaying game that uses an idea similar to the starlines in this story. It can be found here:
http://phreeow.net/wiki/tiki-index.php?page=Diaspora
Come to think of it, I have no idea if there’s anyone with an interest in roleplaying games in this forum... if there is, have fun!
Patrick (orthonormal), I’m fairly sure that “Earth” is correct. They haven’t admitted that what they’re going to do is blow up Huygens (though of course the President guesses), and the essential thing about what they’re doing is that it stops the aliens getting to Earth (and therefore to the rest of humanity). And when talking to someone in the Huygens system, talk of “the Huygens starline” wouldn’t make much sense; we know that there are at least two starlines with endpoints at Huygens.
Eliezer, did you really mean to have the “multiplication factor” go from 1.5 to 1.2 rather than to something bigger than 1.5?
(Second attempt at posting this. My first attempt vanished into the void. Apologies if this ends up being a near-duplicate.)
Patrick (orthonormal), I’m pretty sure “Earth” is right. If you’re in the Huygens system already, you wouldn’t talk about “the Huygens starline”. And the key point of what they’re going to do is to keep the Superhappies from reaching Earth; cutting off the Earth/Huygens starline irrevocably is what really matters, and it’s just too bad that they can’t do it without destroying Huygens. (Well, maybe keeping the Superhappies from finding out any more about the human race is important too.)
Are bodily pain and embarrassment really that important? I’m rather fond of romantic troubles, but that seems like the sort of thing that could be negotiated with the superhappies by comparing it to their empathic pain. It also seems like the sort of thing that could just be routed around, by removing our capacity to fall out of love and our preference for monogamy and heterosexuality.
Grant: I don’t find the SHs to be “nice” in any sense of the word. … They are offended by mankind’s expression of pain (its a negative externality to them) and want to remove what offends them.
I’m not entirely sure how “they are offended by helpless victims being forced to suffer against their will and want to remove that” translates into “the SHs aren’t nice in any sense of the word”.
Manon, thanks for pointing that out—I’d left that out of my analysis entirely. I too would like untranslatable 2. It doesn’t change my answer though, as it turns out.
If the SHs find humans via another colony world, blowing up Earth is still an option. I don’t believe the SHs could have been bargained with. They showed no inclination towards compromise in any other sense than whichever one they have calculated as optimal based on their understanding of humans and babyeaters. Because the SHs don’t seem to value the freedom to make sub-optimal choices (free will), they may also worry much less about making incorrect choices based on imperfect information (this is the only rational reason I can come up with for them wanting to make a snap decision when a flaw in their data could lead to more of what they don’t want: suffering). It is probably the norm for SHs to make snap decisions based on all available data rather than take no action while waiting for more data. They must have had a weird scientific revolution.
Kaj,
They aren’t offended by suffering, but the expression of it. They don’t even understand human brains, and can’t exchange experiences with them via sex, so how could they? Maybe the SHs are able to survive and thrive without processing certain stimuli as being undesirable, but they never made an argument that humans could.
Psy-Kosh: I understand the metarationality arguments; my point is that we didn’t defect in a prisoner’s dilemma. PD requires C/C to be preferable to D/D; but if destroying Huygens is defecting for humans, that can only be the case (under the story’s values) if cooperating for Superhappies involves modifying themselves and/or giving us their tech without us being modified. I don’t think that was ever on the table. (BTW, I liked your explanation of why the deal isn’t so bad.)
Simon: Eliezer tries to derive his morality from human values… Common mistake; see No License to Be Human.
Thom: What do you mean by “naturalistic fallacy fallacy”? Google reveals several usages, none of which seem to fit. Also, regardless of Simon’s actual values, it seemed to me he treated the statements “I buy ice cream instead of helping starving children” and “I value ice cream over helping starving children” as identical; this is a fallacy that I happen to find particularly annoying.
Nick,
There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of ‘one cannot derive an ought from an is’ and whatnot. A lot of this comes from hearing about the “naturalistic fallacy” and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the “naturalistic fallacy fallacy”, as the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.
As for the fallacy you mention, I disagree that it’s a fallacy. It makes more sense to me to take “I value x” and “I act as though I value x” to be equivalent when one is being honest, and to take both of those as different from (an objective statement of) “x is good for me”. This analysis of course only counts if one believes in akrasia—I’m really still on the fence on that one, though I lean heavily towards Aristotle.
So, what about the fact that all of humanity now knows about the supernova weapon? How is it going to survive the next few months?
Reading the comments, I find that I feel more appreciation for the values of the Superhappies than I do for the values of some OB readers.
This probably mostly indicates that Eliezer’s aliens aren’t all that terribly alien, I suppose.
@Wei:
It’s just another A-Bomb, only bigger. By now, they must have some kind of policy that limits problems from A-Bombs and whatever other destructive thingies they have. On the other hand, the damage from blowing up Sol is even more catastrophic than just blowing up any world: it shatters humanity, with no prospect of reunion.
Nick, note that he treats the pebblesorters in parallel with the humans. The pebblesorters’ values lead them to seek primeness and Eliezer optimistically supposes that human values lead humans to seek an analogous rightness.
What Eliezer is trying to say in that post, I think, is that he would not consider it right to eat babies even conditional on humanity being changed by the babyeaters to have their values.
But the choice to seek rightness instead of rightness’ depends on humans having values that lead to rightness instead of rightness’.
Simon: Well, the understanding I got from all this was that human development would be sufficiently tweaked so that the “Babies” that humans would end up eating would not actually be, nor ever have been, conscious. Non-conscious entities don’t seem to really be too tied to any of my terminal values, near as I can tell.
Of course, if the alteration was going to lead to us eating conscious babies, that’s a whole other thing, and if that was the case, I’d say “blow up Huygens twice as hard, then blow it up again just to be sure.”
However, this seems unlikely, given that the whole point of that part of the deal was to give something to the babyeaters in return for tweaking them so that they eat their babies before they’re conscious or whatever. The whole thing would end up more or less completely pointless from (near as I can tell) even a SuperHappy point of view if they simply exchanged us for the babyeaters. That would just be silly… in a really horribly disturbing way.
I agree with Tarleton, I think. Can someone briefly summarize what is so objectionable about the superhappy compromise? It seems like a great solution in my view. What of importance is humanity actually giving up? They have to eat non-sentient children. Hard to see why we should care about that when we will never once feel a gag reflex and no pain is caused to anyone. Art and science will advance, not retreat, due to superhappy technology being applied to them. The sex will be better and there will be other cool new emotions which will have positive value to us. The solution is not sphexish in a singularity fun sense, and immediately after modification any lingering doubts won’t exist. I must be missing some additional aspect of life that people think will get lost? I would not be at all surprised if humanity’s CEV makes them essentially the superhappy people.
Dan: Obviously part 8 is the ‘Weirdtopia’ ending!
(I mean, we’ve had utopia, dystopia, and thus by Eliezer’s previous scheme we are due for a weirdtopia ending.)
This lurker has objections to being made to eat his own children and being stripped of pain: SH plan is not a compromise, but an order. From the position of authority, they can make us agree to anything by debate or subterfuge or outright exercise of power; the mere fact that they seem so nice and reasonable changes nothing about their intentions, which we do not know and which we cannot trust. How do we know that the SH ship’s crew are true representatives of the rest of their race? Why is it that they seemingly trust/accept Akon as the representative of the Entire Human Race? I think the attitude of Niven’s ARM Paranoids is proper here (“Madness Has Its Place”).
As an aside, I am glad that I have read the Motie books, and even more glad that I happened to start watching Fate/Stay Night last week. To be this entertained, I would have to be my teenaged self reading The Fountainhead and The Mote in God’s Eye for the first time and simultaneously. Thank you, Eliezer, for making me mull alternative ethics and lol simultaneously for the first time.
Don’t expand this into a novel, it was superb but I’d rather see a wider variety of short works exploring many related themes.
Perhaps this is just me not buying the plot justifications that set up the strategic scenario, but I would be inclined to accept the SuperHappy deal because of a concern that the next species that comes along might have high technology and not be so friendly. I want the defense of the increased level of technology, stat. Sure, it involves giving up some humanity, but better than giving up all of humanity. Once I find that there are 2 alien species with star travel, I get really really worried about the 3rd, 4th, etc. Maybe one of them comes from a world w/o SIAI, w/o Friendly AI, and it is trying to paperclip the universe. Doubling even faster than the SuperHappies because it doesn’t stop for sex (it has rewritten its utility function so paperclipping and acts that facilitate maximal speed of paperclipping are sex).
I would accept the changes to human nature implied by the SuperHappy deal to prevent being paperclipped.
I agree that this section of the story feels a bit rushed, but maybe that is the intention.
I don’t really like how easily these people in high positions of authority are folding under the pressure. The President in particular was taken out with what was to me very little provocation.
Plus, I just can’t relate to a human race that is suicidally attached to preserving its pain and hardships. The offer made by the Superhappies is just not that bad.
Eliezer tries to derive his morality from stated human values.
In theory, Eliezer’s morality (at least CEV) is insensitive to errors along these lines, but when Eliezer claims “it all adds up to normality,” he’s making a claim that is sensitive to such an error.
I agree that deriving morality from stated human values is MUCH more ethically questionable than deriving it from human values, stated or not, and suggest that it is also more likely to converge. This creates a probable difficulty for CEV.
It seems to me that if it’s worth destroying Huygens to stop the Superhappies it’s plausibly worth destroying Earth instead to fragment humanity so that some branch experiences an infinite future so long as fragmentation frequency exceeds first contact frequency. Without mankind fragmented, the normal ending seems inevitable with some future alien race. Shut-up-and-multiply logic returns error messages with infinite possible utilities, as Peter has formally shown, and in this case it’s not even clear what should be multiplied.
Psy-Kosh: I was using the example of pure baby eater values and conscious babies to illustrate the post Nick Tarleton linked to rather than apply it to this one.
Michael: if it’s “inevitable” that they will encounter aliens then it’s inevitable that each fragment will in turn encounter aliens, unless they do some ongoing pre-emptive fragmentation, no? But even then, if exponential growth is the norm among even some alien species (which one would expect) the universe should eventually become saturated with civilizations. In the long run, the only escape is opening every possible line from a chosen star and blowing up all the stars at the other ends of the lines.
Hmm. I guess that’s an argument in favour of cooperating with the superhappies. Though I wonder if they would still want to adopt babyeater values if the babyeaters were cut off, and if the ship would be capable of doing that against babyeater resistance.
It’s interesting to note that those oh-so-advanced humans prefer saving children to saving adults, even though there don’t seem to be any limits to natural lifespan anymore.
At our current tech-level this kind of thing can make sense because adults have less lifespan left; but without limits on natural lifespan (or neural degradation because of advanced age) older humans have, on average, had more resources invested into their development—and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me that the decision to save human children in favor of adults is a result of executing obsolete adaptations as opposed to shutting up and multiplying. I’m surprised nobody seems to have mentioned this yet—am I missing something obvious?
There are at least a few factors I see as relevant: choice, responsibility, and the notion of giving them a chance to live.
Children, necessarily, have much of their life controlled for them. They are not allowed to make a lot of important choices for themselves, whether they want to or not. So, it is important for those making choices for them to make the right ones, to justify not allowing them that control. I’m not sure I’m quite articulating the concept here, but...
It is the explicit social, legal, and moral obligation of parents to appropriately care for their children. In a broader sense, it is a general obligation of society to care for the weak, helpless, etc.
Part of why the death of a young person is a greater relative tragedy today is that they have greater remaining potential lifespan, but part of it, in many people’s minds, is that they have not yet had a chance to experience various major things. You’d feel a little sad for someone who, for example, died without ever having been in love, even if the person is 83, right? A little kid has missed a lot of experiences.
Sebastian,
Here there is an ambiguity between ‘bias’ and ‘value’ that is probably not going to go away. EY seems to think that bias should be eliminated but values should be kept. That might be most of the distinction between the two.
Are bodily pain and embarrassment really that important? I’m rather fond of romantic troubles, but that seems like the sort of thing that could be negotiated with the superhappies by comparing it to their empathic pain. It also seems like the sort of thing that could just be routed around, by removing our capacity to fall out of love and our preference for monogamy and heterosexuality.
The problem with much of the analysis is that the culture already has mutated enough to allow for forcible rape to become normative.
I’m not sure that the SuperHappy changes as to “romantic troubles” are much more of a change than that.
Humanity is doomed in this scenario. The Lotuseaters are smarter, and the gap is widening. There’s no chance humans can militarily defeat them now or at any point in the future. As galactic colonization continues exponentially, eventually they will meet again, perhaps in the far future, but the Lotusfolk will be even stronger relatively at that point. The only way humans can compete is by developing an even faster strong AI, which carries a large chance of ending humanity on its own.
So the choices are:
-accept the Lotusfolk offer now
-blow up the starline, continue expanding as normal, delay the inevitable
-blow up the starline, gamble on strong AI, hopefully powering up human civ to the point where it can destroy the Lotusfolk when they meet again
This choice set is based on the assumption that the decider values humanity for its own sake. I value raw intelligence, the chassis notwithstanding. So the only way I would not choose option 1 is if I thought that the Lotusfolk, while currently smarter, were disinclined to develop strong AI and go exponential, and thus, with humanity under their dominion, no one would. If humans could be coaxed into building strong AI in order to counter the looming threat of Lotusfolk assimilation, and thus create something smarter than any of the three species combined, then I would choose option 3.
This was an interesting story, though I wonder if the human capitulation either option offers is the only option. Bluntly, the SuperHappies don’t strike me as being that tough. Even if their technology is higher and their development is orders of magnitude faster than ours, they are completely unwilling to accept suffering, even when it comes through their own sense of empathy. All humans have to do is offer a credible threat of SuperHappy suffering and convince them to modify themselves not to care about our suffering, i.e. “We will resist you every step of the way, thus maximizing our suffering, plus you cannot be 100% sure you’ll be able to convert us without us inflicting at least some harm.”
Hm, I think the spam guard ate my last comment, so I’ll repeat:
I don’t think the SH are really up to converting an unwilling humanity. Despite all their superiority, they are fundamentally unwilling to be inconvenienced, so humans only have to successfully argue their case by pointing out the probable mass suicides depicted in the alternate ending, and that SH society might take some casualties; since they are almost completely risk-averse, even the possibility of losing a single ship might be enough to scare them off.
It’s a bit like the world being unwilling to intervene in North Korea despite the overwhelming advantage; it’s just not worth a single life lost to us.
Given the SH’s willingness to self-modify, it would be easier to convince them to ratchet down their empathy for us to tolerable levels.
Bugger there’s my original comment after all. Whoops.
The only real solutions for humanity seem to be either to supernova the colony world’s star or, if this is unacceptable, to prepare supernova devices around all human stars and threaten to supernova everyone if the Super Happies enter our space.
I can’t help but wonder why the humans in this story did not simply say, “We long ago invented chemical means for individual humans to achieve perfect, undifferentiated happiness, but most individuals seem to consider themselves happier without their constant usage.” This is perfectly true, and if it perhaps would not have completely satisfied the Super-Happies (no doubt they would want immature humans anesthetized until they were old enough to choose), it might at least have served as a significant piece of evidence. I can hardly imagine a society that has legalized rape retaining a taboo against the use of Ecstasy or some future derivative thereof.
But the individuals don’t consider themselves happier without their constant usage. It’s just that happiness isn’t these individuals’ supreme value, the same way it seems to be for the SuperHappies.
Consider a human mother who was told that she could take a pill and live in perfect happiness ever after, but her children would have to die for it. If she loves her children, she won’t take the pill; it doesn’t matter that she knows she would be happy with the pill, it’s just that her children’s well-being is more important to her than her own future happiness.
Oh, I see. I’ve been confusing happiness as a state of present bliss with happiness as a positive feeling regarding a situation, which are not quite the same thing. Excellent reply, thank you.
Alternately, imagine that the pill would alter the structure of her mind so that she would become the sort of being that would be happy about her children dying.
So even in the case where it relates to situations, one might reject such a pill.
Well, the Superhappies would have already known that if they read the data dump correctly...
In that case, I notice that I find myself confused.
The Superhappies don’t have perfect, undifferentiated happiness. Note their shock and distress when they find out about the lifestyle of the Babyeaters. They’ve simply excised some sources of unhappiness from their psychology.
Which ones?
Embarrassment, relationship anxiety… I’d have to reread the story to remember the full list; it’s been over a year since I read it.
The thing I wonder is why humanity didn’t insist that the Superhappies refrain from acting on humanity until they had a better understanding of us. They made a snap judgement that was obviously incomplete, given what fraction of humanity opted for suicide under their plan; given more time, they likely could have come up with a plan that would reach their desired aims (not being made unhappy by humanity) with a minimum of distress to all parties...
I wonder to what degree civilization is going to fracture a few years after the shock. At the very least, I’d wager that several large deontological factions/communities/cults would spring up with a sentiment of “Look where utilitarianism led us!”, possibly taking over some colonies. Violence or major secession are more questionable.
I was told that Yudkowsky is a writer who does not allow his characters to act irrationally solely in order to move the plot forward. On the evidence of this text, however, that is not true at all. The plot has more holes than you can drive a truck through. The characters are cardboard cutouts, their motivations opaque. The author should stick to his day job, whatever it is.