Three Worlds Collide (0/8)
“The kind of classic fifties-era first-contact story that Jonathan Swift might have written, if Jonathan Swift had had a background in game theory.”
-- (Hugo nominee) Peter Watts, “In Praise of Baby-Eating”
Three Worlds Collide is a story I wrote to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct. It grew, as such things do, into a small novella. On publication, it proved widely popular and widely criticized. Be warned that the story, as it wrote itself, ended up containing some profanity and PG-13 content.
PDF version here.
In the ideal case, would you recommend reading each chapter separately, with a day-long pause in between to digest, or reading them all at once? Or perhaps you would like to hear feedback from people who have taken each approach, to see which works better.
wellll.. it’s kinda fun, Eliezer, I guess, so I’ll give you the benefit of the doubt and keep reading… but… but… is this format quite right for OB?
Would this series not be better in - ooh, I don’t know - a new and more open sister site of some kind, perhaps with the key points written up at the end and posted on to OB, if they seem popular? Or am I wrong?
When it’s done, is there any chance you’ll stick it online in an ereader compatible format? PDF is ok, but EPUB would be better.
I don’t tend to read very long things on a computer, so having it in a more friendly format would be nice.
Andrew, will try to remember. Remind me when it’s done.
Botogol, Less Wrong isn’t ready yet, and now is when people are asking me about what sort of values aliens might have.
could you please make an EPUB version, as for your Harry Potter fanfiction? With PDFs you can’t change font size, so it’s a big pain to read with an ebook reader. thanks
yup. alien ones.
List of allusions I managed to catch (part 1):
- Alderson starlines — Alderson Drive
- Giant Science Vessel — GSV — General Systems Vehicle
- Lord Programmer — allusion to the archeologist programmers in Vernor Vinge’s A Fire Upon the Deep?
- Greater Archive — allusion to Orion’s Arm’s Greater Archives?
Good times!
Oh, on the subject of stories and that post about dreams, that reminds me: you had said to remind you to tell us about your “most philosophically interesting dream”.
Three worlds collide?
As of part 1, we’ve seen two...
Excellent. I was reluctant to start reading at first, but when I did, I found it entertaining. This should be a TV series. :)
I too felt a bit anxious about reading this but was glad I did! It’s entertaining to read and very interesting to think about.
Thanks.
@Eliezer—nope, sorry, 3/8 now, seems like 10,000 words of cod fiction and OB has truly jumped the shark.
There’s a load of good ideas there, but praps you shoulda’ waited until LessWrong was working AllRight.
really good writing. keep them coming :-)
botogol, what is cod fiction? Is COD an acronym for “capacity on demand” or “change of direction”?
Eliezer,
Personally, I liked the Babyeaters. At the outset of your story, I thought (1) that their babyeating would be held up as an example of the triumph of rationality (around population control), and (2) that their refusal to modify themselves would be based on their recognition that the specific act of babyeating nurtured and protected a more general capacity and respect for rational thought. I thought that Babyeating was being proposed as a bootcamp for overcoming bias. Maybe this idea would be interesting to explore?
In general, an interesting story. I did not find it coercive or deceptive, as some other commentators did, and despite wide disagreement with what I take to be your own views; like your piece on truth (“The Simple Truth”, I believe it was), I found it clear, deftly made, and thought-provoking.
Even if one wishes to argue the virtues of mass murder as a method of intentional population control, which I find quite horrifying enough, I would hope that violent assault and month-long torture are not one’s preferred methods.
The story is more of a way of toying with the subjective nature of morality. The takeaway of the story is not whether baby-eating is right or wrong—an objective answer to this question is impossible—but the difficulties that arise during the interactions of moral agents with incompatible values.
Human conflicts between nations have been about conflicts of interest, and political conflicts within nations are often about conflicts of values... but what happens when someone’s moral values are fundamentally alien to your own?
I thought this was very, very good, probably my favorite of your writings that I’ve read so far. I think it’s quite a bit better than the Harry Potter fanfic—which is itself good fanfic, but “good for fanfic” is a much more forgiving category than “good fiction.” When you mentioned trying to get a Hugo for HPMOR I thought you were revealing an embarrassing inability to self-calibrate your own skills as a writer: HPMOR is not good enough to be publishable (even leaving copyright issues aside), and it’s very far from being at a Hugo-winning level. It is not, however, ridiculous to think that fiction of the “Three Worlds Collide” caliber could compete for Hugo-type prizes.
In summary, I’d like to see more of your original fiction, and if you chose to I don’t doubt that you could publish stories in major-market genre magazines.
Really? What makes HPMoR not good enough to be publishable?
50 Shades of Gray was a Twilight fanfiction, and apparently it was good enough to be publishable.
What does it actually mean for a piece of fiction to be ‘good’? HPMoR can be an author tract at times, but it also has one of the most intricate plots I’ve ever read, specifically designed so that thinking about it with knowledge of Bayesian cognition and rationality allows the reader to discover more things about the story. There aren’t many stories like this.
What about the actual quality of the writing isn’t good enough? I would say it is at least as good, in terms of whatever it is that makes me enjoy it, as 80% of all fiction I’ve ever read.
And sometimes when I read Humanism Part 3 I think it’s better than 100% of other fiction I’ve read.
Keep in mind that 50 Shades of Gray was “good” enough to be popular among roughly the same target audience that were already fans of Twilight.
That said, I’m also very curious about why Siduri thinks that HPMoR isn’t good enough to be publishable. It’s certainly not without its flaws, and I think a professional editor would improve rather than detract from it, but I’ve found it to be a more fun read than any published book I’ve read since before it even started being written. A book doesn’t need to be flawless to be publishable, it just needs to be able to find an audience willing to buy it.
Unlike 50 Shades of Gray, it’s almost certainly utterly unpublishable though, because removing it from the context of the Harry Potter setting would destroy the basis of the plot.
Honestly, I don’t really understand what the grandparent could be thinking. I may be said to know something about literature at this point, and the literary level of HPMOR is far, far above 3WC. Maybe grandparent only read the first 10 chapters or something, I was still somewhat catching my stride then (not to mention writing chapters with much less editing and effort invested).
There was some interest from professional SF writers in 3WC (e.g. Peter Watts) but nowhere near the level of buzz at SF conventions that’s been reported to me for HPMOR.
Much though I like HPMOR, it’s simply too long to keep the same level of sustained awesome. 3WC is awesome all the way through. HPMOR is probably better at its best, but on average it’s simply not as good—although it’s certainly better at times, it can also be worse at times, because it has more space to make mistakes and recover.
In my expert opinion.
If I had to guess, I’d say it’s a genetic heuristic thing. Assuming that since HPMOR is a fanfic, and since most of the possible arguments for why a particular fanfic is good are wrong, arguments for why HPMOR is good must be wrong.
He also said it wasn’t good enough to publish, but when asked why, said there were legal issues with publishing fanfiction, which isn’t evidence either way for its ‘goodness’. This makes me think he has no arguments addressing the actual goodness of the writing.
Just because someone has trouble articulating the issues inherent in something does not necessarily mean they are unable to recognize that said issues exist.
I think the argument, however, is moot—HPMOR is on the internet, and therefore already has been “published” in a sense.
HPMOR has several issues, however:
1) The writing has a very odd quality to it. After reading the comments on this site for a while, as well as the dialogue in this story, it is obvious that there is some sort of shared language amongst this group of rationalists that is employed by the author of HPMOR—or that many people here simply imitate his writing style. This quality makes the prose feel somewhat strange and stilted.
2) The work meanders too much. It is not written concisely, and a paragraph is often used when a sentence would do.
3) The work is inaccessible to a general audience. There is a certain sort of person who enjoys works like that; I suspect that the internet is thus an ideal medium for reaching them.
4) The work is a work of fanfiction, and thus is unpublishable.
5) The work is a work of fanfiction, and as such, creates certain expectations regarding the characters and the setting which can be disconcerting.
Indeed, the sheer amount of work that went into HPMOR kind of saddens me, like a great deal of fanfiction that I read. It is not that there cannot be good fanfiction, but that fanfiction has certain constraints on it (including inability to publish) which hurt it. I would have loved to have seen something like HPMOR which was a wholly original rather than a derived work, and it strikes me that many who write fanfiction are rather limiting themselves by not allowing themselves to go beyond such.
I would be surprised if fanfiction for a popular piece of media didn’t get far more eyes looking at it than equally-good (or equally-poor) original work, even taking into account the larger number of eyes drawn to published work.
So if my goal is to maximize number of eyes looking at my words, the constraints of fanfiction might hurt it less (in terms of what I value) than the constraints of original work.
I like fanfic.
I don’t, in general, post even constructive criticism on fanfic unless I’m specifically asked to (as a beta reader or something) and even then I will sandwich the con-crit between the most heaping helpings of praise that I can come up with for the work as a whole. The reason for this is that most fanfic writers are motivated by praise. They’re not getting paid, after all: the praise is all the reward they get, so the praise had better be good. If I like a piece of fanfic, if I want more of it, I try to provide praise, and the more effusive the better.
I think most fanfic readers intuitively do this, and I worry that EY is taking comments like “HPMOR is the best thing I ever read!!!” literally, when a lot of that sort of stuff is just characteristically enthusiastic fan-feedback. (I’m willing to accept that JohnWittle means it literally, although, seriously? You’d trade Shakespeare and James Joyce—Neil Gaiman and Tolkien and Ursula K. Le Guin—for HPMOR? It’s pretty hard for me to wrap my head around that.)
But in general, the reasons that I don’t think HPMOR is as good as TWC have to do not with sentence-level construction but with plot momentum, tightness of theme, efficiency/consistency of characterization etc. It’s obviously not really “fair” to critique HPMOR on these grounds since we’re mostly seeing stuff that EY is posting as he completes, rather than a revised and polished final version, and because HPMOR is huge and rambling while TWC is a short story. But I was impressed that TWC had such focus, consistency, and drive because it’s something that I’ve felt lacking in HPMOR.
I’m speaking generally but that’s as critical as I want to get. I really don’t want to trash HPMOR—it’s just the “best thing in all of literature” comments that make me boggle a little. What I actually wanted to do was praise TWC, which I think is a truly excellent story.
Shakespeare I would trade for those weeks of my high school life back, to spend on learning something more valuable.
James Joyce is an author I have heard of and have an intuition that I would experience social pressure against me if I did not assign him high status. From the reviews I read of Ulysses I would pay money to not have to read it. I don’t object to other people reading it or enjoying the sophistication.
Tolkien’s stories I would trade for MoR. His stories are rather dull. I wouldn’t trade his world or, especially, the overwhelming influence he had on fantasy fiction in general and elves in particular.
Neil Gaiman’s work I would trade, but reluctantly. I enjoyed Stardust. But Gaiman’s work is more typical and substitutes more easily found. Extreme Rational characters and worlds are overwhelmingly rare.
Ursula K. Le Guin? Haven’t read. Is her work closer in style and significance to Joyce, Shakespeare, Gaiman or MoR? If one of the last two I’d add her to my to read list.
I’m not sure “You’d trade?” is the right comparison to make. Perhaps “you would assign higher status to” or “you believe is more sophisticated and polished artwork” would give you the answer desired.
Le Guin is a death worshipper. The major theme of the Earthsea books is the folly of the quest for immortality or even survival, and the naturalness of death.
Thank you. That is the kind of attitude that at times makes me abandon a book in disgust. If I don’t identify with the goals or decisions of the protagonist I tend to be either uninterested in or repulsed by the work. I’ll avoid the author.
I agree about the deathism of Earthsea. And it has other faults, such as the fourth volume (Tehanu) being her turning against (although not entirely) the misogyny of the whole setup of the first three, and with the zeal of the newly enlightened retconning “men evil, women good” onto it. Always Coming Home is full of fluffy woo.
But she also wrote the short story The Ones Who Walk Away From Omelas, which is worth finding, because it’s about a standard utilitarian problem. I’m sure some philosopher posed it in exactly the form in which her story presents it, but I’ve not been able to track that down. Imagine a utopia — whatever utopia you like — except that it must be sustained by the suffering of a little girl confined in a cell and tortured for ever. It is part of the thought experiment that the utopia and the suffering are necessarily connected: the little girl can only be freed at the cost of ending the utopia. It is alluded to in HPMOR.
My subversive interpretation of Omelas is that the kid whose suffering the good of the entire place depends on is a taxpayer. We tax the kid in suffering, and we use the suffering to buy prosperity for everyone else. Of course, this is a tax which hurts people unequally (the kid: a lot, everyone else: not at all), but even conventional taxes can’t make everyone better off, and this is especially so for taxes that are intended for redistributing wealth.
What makes this interpretation subversive, of course, is that the very same people who talk about how we should consider how our actions affect others are generally the biggest proponents of taxation and wealth redistribution. I’m pretty certain that Le Guin isn’t a libertarian; you’re supposed to read the story and conclude that you’re victimizing others and that you have obligations towards others—not that you’re the victim and other people have obligations towards you.
As I recall, the especially miserable but obligatory afterlife in the first three books got revised in the last (fifth?) book. The initial state turned out to be a magical working which seemed like a good idea at the time. Anyone remember the details?
I don’t agree with your characterization. I would say that the major theme of the first book is attaining self-knowledge, while the major theme of the second and fourth books is overcoming abuse.
The major theme of the third book is confronting mortality. In that book the land of the dead is portrayed as a terrible place, and the heroes of the book struggle with everything they have and are to escape it. But it’s true that there’s a villain whose quest for immortality is portrayed as selfish and dangerous.
The major theme of the fifth and final book is looking outside the self and understanding others. There’s some business with the land of the dead involved in this one too, but there’s an answer given that I don’t think boils down to death-worship.
Are you making an argument for aesthetic Stalinism?
Whether a work of art or literature is good is not necessarily related to whether it conveys lessons one agrees with.
No, quite clearly not. That being the case it is disingenuous to ask for rhetorical purposes.
Not necessarily, but it is a particularly strong reason. If a piece of fiction has the inferred purpose of conveying a lesson and that lesson is a bad lesson then the value of the piece of fiction could easily be negative. This is different to a non-fiction work that accurately conveys reality. Reality isn’t something that we get to choose, lessons and values are.
I was asking it ingenuously and straightforwardly, actually.
HPMOR is clearly didactic in this way; it’s not at all clear to me that Le Guin’s writing is (with the exception of Omelas).
I thought that A Wizard of Earthsea was a good counterpoint to a lot of the other fantasy books that I read as a child; I think that MoR!Harry would have made a few less mistakes if he had read and grokked it.
Le Guin is a genre writer, like Tolkien and Gaiman. For the most part she’s not stylistically difficult as Shakespeare or Joyce can be, although Always Coming Home is experimental in form.
I think she’s wonderful. Her Earthsea books (A Wizard of Earthsea is the first) are a good accessible jumping-on point if you’re interested in checking her out. Or The Dispossessed if you prefer science fiction. Or one of her short story collections, maybe The Wind’s Twelve Quarters or The Compass Rose.
I would only advise you to stay away from The Left Hand of Darkness—that one won both the Hugo and the Nebula and is the book of hers most likely to be taught in college courses, but she regards it as something of a failed experiment and personally I tend to agree.
I have absolutely no idea why this is a response to my comment; it seems entirely unrelated.
Just to be clear, I am not defending the quality of HPMoR, and I’m fairly certain I’ve never done so, though I’ve recommended it to several friends to whom I think it would appeal… not for the adequacy of its writing, but for the rarity and audience-appropriateness of its themes.
Misclick on my part. I meant to reply to JohnWittle (the grandparent of your comment). Sorry for the confusion!
Interesting choices to represent better literature.
Personally, I think James Joyce’s work is the Sokal hoax of highbrow literature, but YMMV. (I’m not kidding.)
I don’t think you’re kidding, but my response to this will vary depending on whether you have made an honest effort to read Joyce. Have you actually sat down with any of his books? Which ones, and how long did you give it?
Personally, I feel that Ulysses delivered one of the single most transporting experiences I’ve ever had as a reader. However, the book is deliberately hard in places. It’s kind of like “The Neverending Story”—Joyce is writing about The Hero’s Journey but he aims to make you, the reader, experience that journey on a visceral level along with the protagonist of the book. So when things are hardest for the protagonist, the book also becomes difficult to decode and to read.
My opinion is that this trick pays off in the end, when I as a reader experienced a sense of relief and homecoming just as the protagonist did. The last line of Ulysses can be endlessly quoted (“yes I said yes I will Yes”) but the sweetness and the power of it is something that has to be experienced, by going on the journey.
Fanfiction inherently limits the number of people who will ever look at it; an independent work stands on its own merits, but a fanfiction stands on both its own merits and the merits of the continuity to which it is attached. Write the best fanfic ever about Harry Potter, and most people still will never read it because your audience is restricted to Harry Potter fans who read fanfiction—a doubly restricted group.
While it is undeniable that it can act to promote your material, you are forever constrained in audience size by the above factors, as well as in the composition of that audience, which is limited to people who consume fanfiction of fandom X.
I agree that fanfic has a lower ceiling than original work. But it isn’t necessarily better to raise my ceiling than to raise my average.
Write an original work, and unless you are both very lucky and very good, the number of people who see it is more or less zero.
If you write an original work, then I am very sorry, but I probably will not read it. There is a barrier to diving into a new world, a trivial inconvenience, but nonetheless a cost too high for the expected return, which by Sturgeon’s Law is near zero. On the other hand, in fanfiction I already know the world, and that makes it easier to jump in.
Yes, for fanfiction there is an upper bound to the readership numbers, but in practice, that isn’t what you should be worrying about when trying to get people to read your work. The hard part is separating yourself out from the Sturgeon’s Law chaff surrounding you, and that is an easier task if your work is a work of fanfiction.
There’s quite a number of HPMOR readers who’ve never read HP. Admittedly this may be a special case, and it’s not HPMOR’s original intended optimal use-case either (reading Philosopher’s Stone first is a good idea if you can).
I tried the original after HPMOR, and it reads like mediocre fanfic :) Harry is just all wrong...
I was a lifelong HP fan before reading HPMOR—and I would almost certainly never have read it if it wasn’t HP fanfiction. (Or popular on TVtropes, but that’s another matter.)
I only decided to watch the movies after I read an early version of HPMOR.
The second factor is much more important for most authors for most stories. I read a lot of fanfiction by people whose original works I never would have found, because their original works aren’t stored in a fanfiction repository. It’s like how you could go to DeviantArt and look at people’s original works, but you’re much more likely to come across drawings they’ve done of things you’re both fans of.
Worrying that you are forever constrained in audience size seems odd; most people never read most stories. The question is how many you can get to read it, and when.
Using another rationalist fanfic as an illustration: I’ve read Luminosity, but never Twilight.
Woah, I never thought of it like that before.
We should be writing crossovers!
What have I done? ; ;
Relevant link is relevant:
The Finale of the Ultimate Meta Mega Crossover
I thought crossovers only appeal to fans of both works, and hence that works the opposite of the way you thought it would?
Huh. Now that you mention it, maybe they do. I’ve certainly read crossovers of series I don’t read, but...
I think I’d expect an S-shaped curve for fanfic, with a term for the popularity of the original work, and a more exponential-looking curve for original fiction. People who read fanfic tend to read a lot of fanfic, and that gets a certain number of eyes on your work that wouldn’t be there if you were publishing original fiction, but it’s exceptionally rare for a fanfic to attract readers that aren’t either part of the (still relatively small) fanfic community or fans of the original work and usually both.
HPMoR is unusual in that it has managed to attract an audience independent of those considerations, but that audience is, as best I can tell, quite small compared to the numbers a bestselling original fantasy novel can be looking at.
Oh, without question. A bestselling original fantasy novel has many more readers than a popular fanfiction. Agreed.
So now, if I want to do an expected value calculation, I should consider the likelihood of my work becoming a bestselling original fantasy novel vs the likelihood of my work becoming popular fanfiction, and the effort involved in pursuing those paths, and cash those out in terms of expected readers gained per unit of work. Agreed?
Assuming we agree on that: what would you estimate the ratio of those numbers to be for EY, while preserving the various ideological/educational purposes he had for his work?
Yeah, I think we agree on the problem statement. As to the solution, that’s an interesting question. Let’s do some Fermi analysis.
We first need to know what popular fanfic is actually looking at in terms of readers. ff.net doesn’t expose that information, unfortunately, but AO3 does. If a comment on AO3 is roughly equivalent to a review on ff.net, then each review is worth about a thousand hits. Assume that five or ten percent of those hits, for a long work, are unique readers (logged-in users aren’t double-counted, but I assume anonymice on dynamic IPs are when their IP changes), and it looks like Methods has seen one to two million readers.
Now, a popular fantasy novel series can be expected to sell somewhere around twenty million copies (there are few single-volume fantasy books that make that list). Assume five books per series and that half of all readers don’t get all the way through (so the average reader buys roughly two and a half books), and we’re looking at somewhere around eight million unique readers.
If EY is risk-neutral, if he mainly cares about maximizing his readership, and if non-bestselling books usually sell relatively few copies, he would have been correct in writing an original series if there was more than a 12-20% chance of making it into bestseller territory. That sounds high to me, so I’m going to say that the fanfic approach was probably a good one—although it’s worth mentioning that Methods is arguably as much an outlier in fanfic terms as your average bestseller is in publishing, and retroactively extrapolating from its performance might not accurately model the kind of forward prediction that Eliezer would actually have been doing when he was making these choices. Also, if he happens to have any contacts in the publishing industry, that’d skew things somewhat towards original fic—not all the barriers to entry are based on dumb luck.
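For what it’s worth, here is a minimal sketch of that Fermi comparison in Python. Every input is just a rough assumption from the comment above (hits per review, unique-reader fraction, series sales, books per reader), and the 20,000-review figure for Methods is purely illustrative; the point is only to show how the 12-20% break-even chance falls out of those assumptions.

```python
# Minimal sketch of the Fermi comparison above. Every number here is a rough
# assumption from the comment, not measured data; the 20,000-review count for
# Methods is illustrative only.

def fanfic_readers(reviews, hits_per_review=1_000, unique_fraction=0.075):
    """Estimate unique readers of a long fanfic from its review count."""
    return reviews * hits_per_review * unique_fraction

def bestseller_readers(copies_sold=20e6, avg_books_per_reader=2.5):
    """Estimate unique readers of a bestselling five-book series.

    If half of all readers drop off partway through, the average reader
    buys roughly 2.5 of the 5 books.
    """
    return copies_sold / avg_books_per_reader

fanfic = fanfic_readers(reviews=20_000)   # ~1.5 million readers
original = bestseller_readers()           # ~8 million readers

# Break-even probability of bestseller status for a risk-neutral author who
# cares only about expected readership (ignoring effort and other payoffs).
break_even = fanfic / original
print(f"fanfic ~{fanfic:,.0f} readers, bestseller ~{original:,.0f} readers, "
      f"break-even chance ~{break_even:.0%}")
```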
I think it pretty obviously does not accurately represent the choice Eliezer faced at the time. Aside from the inherent advantages of fanfiction (no barriers to entry for him or readers, close interaction with readers so he can debug chapters), there are who knows how many thousands of high-quality fanfics he was competing with. It’s worth noting that Eliezer has done a fair bit of original fiction and fanfiction before (http://yudkowsky.net/other/fiction) and AFAIK none of them have been wildly successful, even when you consider them being short stories etc.
The original work was the Sequences. It’s great. But every time we try to get people to read it, they look at it and think “ugh, i really don’t want to read a really long blob of nonfiction. isn’t there something easier?”
hence, HPMOR the fanfiction. It was pretty successful at its job.
I’ve not read it, but I’m given to understand that it has more fucking in it than HPMOR.
Also, I’m under the impression that people read it out of curiosity because it’s famous for having lots of fucking in it, but when they’ve read it they don’t think it’s actually that great. (I haven’t read it either, and I’m not going to in the foreseeable future.)
It’s the first book I’ve ever had to put down after a few pages because the writing was so very awful.
50 Shades of Gray WAS a Twilight fanfiction. After it got adapted, it didn’t have Twilight in it. I don’t think you could adapt HPMoR for non-copyright-infringing publication without ruining it.
All else being equal this is evidence that it is not publishable.
I feel like I’m missing out on most of the clever subtleties.
Is there a way to tell?
I’m also impressed, but not quite as impressed as I should be, for reasons just mentioned.
I actually liked it. Here are my reasons:
- I’m comfortable with the setting (I’ll call it “philosophy in space”)
- The three worlds are intelligently designed (forgive the pun) and fit nicely together (from a dramatic point of view)
- The stereotypical characters each play a nicely defined and important role
- The main characters (Akon and the Confessor) develop nicely
- There are two endings (I’m a sucker for choice :3 )
- The differences between the future world and our present world are noticeable and important to the plot (and not just decoration)
Oh, and a good friend of mine recommended reading it. But that’s not a real reason, is it. ;-)
That sounds like it is a big part of the real reason. Just not part of the justification. :)
Damn you. ;-)
@Eliezer well that looks to me like a very dystopian future… the principle of protection of human autonomy was tossed away already when they legalized ‘non-consensual’ anything, where it is legal for human being A to impose his notions of fun on human being B against human being B’s will. So mankind got raped very gently by the Superhappies—so what? They legalized this already; there’s a legal precedent for a far worse case that in your universe everyone agreed on.
Plus you give zero thought to the concept of human beings as autonomous, each an agent upon himself. What I immediately thought of is that I would have told the Superhappies that, due to the communication bottleneck and biodiversity, humans do not share identical values and have massively different neural wirings, and as such they would have to integrate the values of each human personally, etc. Not three worlds colliding but hundreds of billions. I’d tell them that the human nervous system is wired, in many people, such that absence of pain would lead to diminished pleasure or failure to achieve sentience. I’d have told them the truth that facing such a choice, or having alteration forced on you, results in extreme psychological pain that can disable and/or destroy some individuals. That we see gross modification of existing individuals as death. Etc.
Plus, of course, the fair ‘Superhappy’ aliens are extremely vulnerable to mechanisms that Homo sapiens has evolved to take advantage of naivety. You take some clinical psychopath, the smooth-talking kind (a corporate executive will do fine), get him to talk with the Superhappies, and they’re toast. The 30x thought-speed advantage won’t save them. The tech won’t save them. They’ll get owned by the first Nigerian prince. We have evolved for cognitive predator-prey interactions; 99% of the bias [as in, the vast majority of, not as in an exact number] that you’re overcoming here is the heuristics for those interactions, for surviving a smarter enemy who would convince you into doing something bad with arguments that are only subtly flawed, akin to a proof that 1=2. Where do you think the bias and reluctance to accept reason come from? Well, guess what: 99% of the reason out there is a predator’s blinky lights trying to subvert the neural system of the prey, as in some squid versus some shrimp. The shrimp learnt to close its eyes. Reader, once you take into your mind what I just said there, even to parse what I said, you have to effectively execute Turing-complete code in your head. In wetware with no process boundaries. Rationalism is inherently vulnerable.
They’re entirely toothless, however, with no concept of biting and no carapace. Ditto for the Babyeaters, really, sending off all their data, sending human data to the Superhappies, and otherwise acting like total naive idiots instead of waging the cognitive warfare that humanity has perfected over the course of probably more than a million years. I say, in such an interaction, both of those alien races would get massively, massively exploited by humanity.
Non-consensual conversation is legal and socially approved. A society where non-consensual anything is illegal would look very interesting—explicit mentions of the kinds of interaction you’re open to, long escalation of extremely subtle signals, people mostly ignoring each other all the time, ubiquitous go-betweens—but hardly the only non-dystopian one.
Meh. There are some differences, but not nearly as big as between two random minds.
Yes, but it’s not that huge. It’s a rather isolated preference change.
Bwahaha.
Can’t see why. They understand treating language as a vending machine—vibrations go in, behaviors come out. The sounds “r-i-c-h-b-a-n-k-e-r” need not be evidence of a person’s finances. They didn’t evolve for the same kinds of competition, but they have a concept of non-truthtelling. So I don’t understand where you’re coming from here.
The story is speaking of “non-consensual sex”, the illegal kind (rape), that was legalized. A great many actions are deemed illegal without consent so as to protect the autonomy of humans from other humans; when you start legalizing those actions, you drop the autonomy. Especially for major things.
Conversation is not illegal and thus can’t be “legalized”. Also, try having a conversation with someone against their will, or when they are obviously busy. It is deemed impolite, and is not illegal simply because it doesn’t hurt too much—if you distract someone, causing an injury, you might very well get in trouble.
What the hell is a random mind, a Boltzmann brain? See http://lesswrong.com/lw/dr/generalizing_from_one_example/
In the context of the story—clearly some people would embrace the Superhappies and some would commit suicide at the thought. Sounds significant enough to me. Hell, humans are in reality more diverse in their views than the Babyeaters and Superhappies are in the story.
The story itself—the aliens act far too naively, in ways that are too exploitable and that imply a lack of understanding of untruth. The humans do as well, though. That’s because this whole rationalism thing gets really messy and complicated when you start being rational about what you tell. In particular, the Superhappies went into a near-shock state (lost part of their crew!) over something that the Babyeaters told them, without the slightest thought as to the possibility that the Babyeaters could have engineered an input to the Superhappies which would damage them (which is precisely what happened, except that the creator of the story engineered what the Babyeaters tell them to be shocking).
Even more than this, the Superhappies, despite being in a position of power, are going for some supposedly fair 1/3, 1/3, 1/3 thing where everyone adjusts. Frankly it makes absolutely no sense and is not in the slightest rational, plus it is clearly based on some failed logic and as such prone to manipulation (like every single human being treated separately, and they all dissolve).
I read this about half a year ago, enjoyed it, more or less completely agreed with Eliezer’s point and filed it away.
Then, this morning, I literally woke up screaming. This is not an exaggeration, I must’ve dreamt of something that reminded me of 3WC, and my first waking thought was: “It’s WRONG to be right!”. I do believe that the human condition and human individuality are easily worth practically any number of lives (although holding ourselves hostage and threatening to voluntarily increase the amount of suffering customary for human culture unless the Superhappies give all people a choice in the matter might have worked as a third option—but wriggling out of the author’s intent is pointless). I don’t have a single problem with this logic.
What I have a problem with is myself. I was born with some brain damage (diagnosed only at 19, unfortunately for my teenage years) that, among other socially inconvenient things, strongly inhibits my instinctive empathy; I might value and respect individual people, but can feel very little compassion for them on a personal level, and I wouldn’t hesitate in murdering someone if I believed it was right and necessary. In short, I exhibit traits of an actual sociopath. So I could see myself jumping at the decision, carrying it out and suffering from zero irrational guilt.
That caused a rebellion of sorts inside me. Suddenly I contemplated writing something really, really stupid, sending Eliezer a death threat, hated the thought of becoming transhuman or ever having to deal with a real Hard Choice, devoting the rest of my life to opposing, attacking, slandering and scaremongering against everything that Less Wrong and SIAI stand for. After about two hours it burned out, and now I feel more or less in control. I’m quite puzzled as to what the bloody hell that was. “Fear of having to grow up again” probably comes close.
I can’t name a good reason for posting all this, except for suggesting that strong moral biases could shift into self-defense mode during a Hard Choice scenario, the very moment one would make an honest effort to examine and prioritize his values. Your beliefs could just shut everything down along with themselves to avoid being changed.
(as an aside, with all the shout-outs, it’d be cool if chapter 8 was called “One More Final”, as it has quite a few parallels with The End of Evangelion and its final scene)
Oh, to anyone who agrees with the decision but is still disturbed/looking for a 3rd option due2 those specific victims: THEY DIDN’T DIE AND WERE IN LITTLE DANGER, Eliezer told us an implausible lie to make us think. In fact, the ship was a flotilla and it sent a runner home for each development, AND they didn’t settle 15b people in a frontier system—because people had read previous centuries’ good SF and heeded its warnings. Same goes for every scenario with simple precautions or hidden third options.
The nature of Alderson lines, as described, means that every system is a frontier system.
Ah. I skipped that bit. Thanks.
I’m a nervous, anxious, karma-whoring noob, that’s why I retracted it after a downvote arrived within 5 minutes of posting. Would anyone please explain the downvote, so that I know why I shouldn’t write statements like this one?
I wasn’t the one downvoting (I didn’t catch you before you retracted), but you’re using numbers and characters to write words (“2” instead of “to”, “15b” instead of “15 billion” --- I don’t care if it’s faster for you to write, it makes it harder for a hundred other people to read it), you’re discussing a piece of fiction as if it were a reality that the author “lied” about, and you seem to be thinking that forum members are so emotionally frail that they need to delude themselves.
I’m downvoting this one, since you seem to be abusing the “retract” system just to preserve your karma, not because you actually thought your previous comment ought to have been retracted.
Sorry, I was posting that from my phone, and had to squeeze everything in the 512 character limit; didn’t bother to edit it from the PC.
Well, I happen to be very emotionally frail myself (although 95% of the time I repress my emotions strongly) and, seeing as some of the fine folks here also have various personality disorders, I wanted to assist them with the emotional anguish that I knew they were facing.
Eh heh, absolutely right. If that’s how the system works, people will whore for karma, and nothing can be done about that. I don’t feel guilty in the slightest :)
That’s not acceptable behavior here, and will generally get you downvoted no matter how good your excuse is. (Well, okay, hypothetically speaking there’s probably a sufficiently good excuse. But given that you’re not writing from some ridiculously oppressive country where you have to stick to a 512 character limit in order for your message to have a reasonable chance of getting out of that country undetected rather than triggering your internet connection getting cut, do it right.)
This… may not be the right forum for you. We’re generally trying to go in the opposite direction from what you just described—making ourselves strong enough to deal with difficult truths, not figuring out ways to avoid those truths to stay comfortable.
Downvoting can be done about it. In a more abstract sense, status-based punishment can be done about it, too. You’re not establishing a very good reputation, that way.
But I am trying to become stronger. I was hurt by changing my mind, and did derive value from it; afterwards, the hurt stopped being productive and I tried to mitigate it.
I could go about it in a sneaky way, or openly; which one do you think is worse? If it looks like a game, especially one of status and prestige, people are going to play it; the system should just tax such behavior by making the occasional genuinely valuable contribution essential for “selfish” strategies.
I have converted the book to epub and mobi. Download link: http://www.filedropper.com/3worlds
I haven’t started reading it yet so let me know if you find any problems with the conversion.
Enjoy!
links are dead!
It seems beautiful, so far. Thanks!
This was an interesting thing to read about, though I have to say the start, with the baby eaters, was the best and most interesting part. The babyeaters were, by human standards, unconscionably evil, but ironically, were actually probably much less so than the happy-happy. Indeed, the sad irony was that the happy-happy were far, far less capable of understanding humanity than the babyeaters were—and I think that the humans could have found peace with the babyeaters. But the happy-happy lack what the other two races possess.
The sad thing is that humans could probably beat the happy-happy rather easily, though. The happy-happy were horrified to the point of nearly being broken by the babyeaters. Humans are capable of coming up with far, far worse things than the babyeaters. If you were to give that to them, wield it as a weapon, you could potentially make a basilisk—and if sufficiently clever in its design, it might well completely annihilate them due to their culture’s inflexibility. Indeed, it is obvious that despite what they claimed, pain was not truly something that was so completely alien to them—they did not experience discomfort in the same way that humans did, but it was clear that they DID experience such things, as they had difficulty coping with what the babyeaters fed them and retreated to their happy fun time chamber “as a reward”.
Of course it would be insanely dangerous, but they hardly had many good options, did they? The other problem is that leaving the happy-happy be could potentially expose other races similar to humanity to them.
Sadly humanity never had the chance to get anything from the happy-happy; the poor babyeaters ruined that for them. It might have helped.
I’ve created a new EPUB as the earlier link was dead. This one has a table of contents and improved typography. You can download it from my Dropbox (68kb).
This link also appears to be dead. Where can an epub or Kindle compatible format be found?
Looks like someone ePub-ified it here .
I asked a friend of mine to read the story. He’s a reincarnationist and he liked it a lot, although he preferred the first ending to the second. He sent me an interesting commentary on the reasons for this preference, which I’m copying and pasting below. I guess the few reincarnationist observations he made won’t be of much interest to most here, but the other considerations are very well worth the reading:
An interesting ethical exercise. It seems to me that it would benefit from cutting out some slack, such as the entire pseudoreligious Confessor’s line (I understand he’s one of the more alive protagonists, but hey, it’s largely drama out of nothing) and the superfluous “markets” (I understand the author is fond of prediction markets, but here they add nothing to the story’s core, only distract). The core, on the other hand—the two alien races and their demands—is drama out of something and would do well with some elaboration.
For one thing, while the Babyeaters are pretty well established (and have some historic analog in the Holocaust, as mentioned in the story itself), the Superhappies look much more muddled to me. Why exactly should I be outraged by them? An overabundance of sex? Sorry, doesn’t work. Lord Akon is somehow disgusted to look at their true bodies, all slimy and tangled? Pardon me, have you looked at your own guts or brain? They’re pretty slimy too. Lack of humour and inability to lie? Well, that may be something to marvel at but hardly something to find morally unacceptable. I think the author missed a good opportunity here—he could have called the second aliens Babyfuckers (which they most likely are, it’s just not highlighted enough in the story) instead of the bland “Superhappies,” so that the humans’ moral outrage looks more justified—and the story’s premise becomes more nicely symmetric.
The only real reason to abhor the Superhappies does not appear until much later in the story, when they reveal their plan to rework the human race. That, at least, is a genuine conundrum. Is pain always bad? If not, when and why can it be good? If it’s only good because it helps us understand the sufferings of others and therefore be altruistic, will pain become useless in a world where (sentient) others don’t suffer anymore? Where’s the line between improving someone and killing-and-recreating-from-scratch? Is this line drawn differently for the body and for the brain? There’s a lot to ponder.
The author’s non-solution of “run, you fools” (which is the same in both endings, only one ending’s escape is more successful than the other’s) is sad and silly, but at least it’s believable. We people are just like that, alas. We so want to improve the imperfect Others, yet we’re so horrified at the thought that some still more perfect Others may want to improve us. Today’s world is abrim with examples. Apparently centuries of mandated rationalism didn’t do much to change that in the crew of the Impossible.
But the biggest problem I have with this story is not with the specific solution the author offers; rather, it is with the conception of “solution” itself. I am not an ethical realist (or at least not an ethical naturalist), so I don’t believe ethical dilemmas work as mathematical puzzles where one answer is correct and all others are wrong. Ethics only exists within and between ethics-capable beings, and it only works via constant deliberation, negotiation, experimentation, building up trust. It’s slow, it’s painful, it’s highly uncertain, but that’s how live ethics works in real life. Going to space or conversing with aliens will hardly change that; if history shows us anything, it’s that the more advanced a culture has become, the less likely it is to speak in ultimatums. So, I think I reject the very premise of this story; something vaguely like that may happen, but in real life it would be much less drastic, much more boring (in general, real life is more boring than fiction), with lots and lots of openings for compromise that all three parties will try to exploit. It doesn’t mean the end result would be rosy and mutually satisfying; it may well happen that some of the civilizations will gobble up, or transform unrecognizably, others. But that’s not going to happen overnight, and it’s not something you can ensure or avoid too far in advance. Rationalism helps you think but it can’t make the world completely predictable.
Actually I am nearly completely on the side of the Super Happies on this one. It is not as if the humans are moral with rape legalized. I’d support the Happies provided:
1. The utility function will not diverge from the goals of a) spreading truth and eliminating delusion, b) spreading happiness and eliminating suffering, c) growing and not dying, with the negative statements taking precedence over the positive ones.
2. That, accordingly, the babies they will eat to accommodate the Baby Eaters are not only not sentient, but also incapable of suffering of any kind, i.e. have the moral status of a rock.
3. That negative behaviour feedback is available for things like putting hands on stoves; a painless equivalent needs to be available to prevent anti-utility behaviour.
So, in short I kind of have to admit I dislike both endings.
I think an important part of what makes their ending so terrifying is that you don’t get to make those stipulations. Or any other stipulations. The Superhappies may or may not follow them, that’s their choice—you just don’t get any say, one way or the other.
Indeed, all situations in which one is powerless share that, I think. It makes it frightful, but does it make it wrong? I don’t necessarily think it does. Assuming for the moment that those stipulations stand (for reasons not necessarily related to the story), is pain abolitionism really as bad as the story seemingly represents it?
I agree that killing billions on the off chance that the Superhappies won’t find you is a horrific gamble. This is the sort of behavior we find in super villains in all sorts of fictional stories. That the ones making the choice sacrificed their own lives does not make it better. Atonement? Yeah, maybe he enjoyed every minute of it the same way he did torturing and raping a girl to death. Maybe that is what the laughter was really about.
Read Xenogenesis by Octavia Butler. It is a better story. We need to evolve and change. We don’t get to refuse evolution. That is a dead end path. There are singularities. That is reality.