Comment on “Death and the Gorgon”

(some plot spoilers)

There’s something distinctly uncomfortable about reading Greg Egan in the 2020s. Besides telling gripping tales with insightful commentary on the true nature of mind and existence, Egan stories written in the 1990s and set in the twenty-first century excelled at speculative worldbuilding, imagining what technological wonders might exist in the decades to come and how Society might adapt to them.

In contrast, “Death and the Gorgon”, published in the January/​February 2024 issue of Asimov’s, feels like it’s set twenty minutes into the future. The technologies on display are an AI assistant for police officers (capable of performing research tasks and carrying on conversation) and real-time synthetic avatars (good enough to pass as a video call with a real person). When these kinds of products showed up in “’90s Egan”—I think of Worth’s “pharm” custom drug dispenser in Distress (1995) or Maria’s “mask” for screening spam calls in Permutation City (1994)—it was part of the background setting of a more technologically advanced world than our own.

Reading “Gorgon” in 2024, not only do the depicted capabilities seem less out of reach (our language model assistants and deepfakes aren’t quite there yet, but don’t seem too far off), but their literary function has changed: much of the moral of “Gorgon” seems to be to chide people in the real world who are overly impressed by ChatGPT. Reality and Greg Egan are starting to meet in the middle.

Our story features Beth, a standard-issue Greg Egan protagonist,[1] as a small-town Colorado sheriff investigating the suspicious destruction of a cryonics vault in an old mine: a naturally occurring cave-in seems unlikely, but it’s not clear who would have the motive to thaw (murder?) a hundred frozen heads.

Graciously tolerating the antics of her deputy, who is obsessed with the department’s trial version of (what is essentially) ChatGPT-for-law-enforcement, Beth proceeds to interview the next of kin, searching for a motive. She discovers that many of the cryopreserved heads were beneficiaries of a lottery for terminally ill patients in which the prize was free cryonic suspension. The lottery is run by OG—“Optimized Giving”—a charitable group concerned with risks affecting the future of humanity. As the investigation unfolds, Beth and a colleague at the FBI begin to suspect that the lottery is a front for a creative organized crime scheme: OG is recruiting terminal patients to act as assassins, carrying out hits in exchange for “winning” the lottery. (After which another mafia group destroyed the cryonics vault as retaliation.) Intrigue, action, and a cautionary moral ensue as our heroes make use of ChatGPT-for-law-enforcement to prove their theory and catch OG red-handed before more people get hurt.


So, cards on the table: this story spends a lot of wordcount satirizing a subculture that, unfortunately, I can’t credibly claim not to be a part of. “Optimized Giving” is clearly a spoof on the longtermist wing of Effective Altruism—and even if I’m not happy about how the “Effective Altruism” brand ate my beloved rationalism over the 2010s, I don’t think anyone would deny the contiguous memetic legacy involving many of the same people. (Human subcultures are nested fractally; for the purposes of reviewing the story, it would benefit no one for me to insist that Egan isn’t talking about me and my people, even if, from within the subculture, it looks like the OpenPhil people and the MIRI people and the Vassarites and … &c. are all totally different and in fact hate each other’s guts.)

I don’t want to be defensive, because I’m not loyal to the subculture, its leaders, or its institutions. In the story, Beth talks to a professor—think Émile Torres as a standard-issue Greg Egan character—who studies “apostates” from OG who are angry about “the hubris, the deception, and the waste of money.” That resonated with me a lot: I have a long dumb story to tell about hubris and deception, and the corrupting forces of money are probably a big part of the explanation for the rise and predictable perversion of Effective Altruism.

So if my commentary on Egan’s satire contains some criticism, it’s absolutely not because I think my ingroup is beyond reproach and doesn’t deserve to be satirized. They (we) absolutely do. (I took joy in including a similar caricature in one of my own stories.) But if Egan’s satire doesn’t quite hit the mark of explaining exactly why the group is bad, it’s not an act of partisan loyalty for me to contribute my nuanced explanation of what I think it gets right and what it gets wrong. I’m not carrying water for the movement;[2] it’s just a topic that I happen to have a lot of information about.

Without calling it a fair portrayal, the OG of “Gorgon” isn’t a strawman conjured out of thin air; the correspondences to its real-world analogue are clear. When our heroine suspiciously observes that these soi-disant world-savers don’t seem to be spending anything on climate change and the Émile Torres–analogue tells her that OG don’t regard it as an existential threat, this is also true of real-world EA. When the Torres-analogue says that “OG view any delay in spreading humanity at as close to light-speed as possible as the equivalent of murdering all the people who won’t have a chance to exist in the future,” the argument isn’t a fictional parody; it’s a somewhat uncharitably phrased summary of Nick Bostrom’s “Astronomical Waste: The Opportunity Cost of Delayed Technological Development”. When the narrator describes some web forums as “interspers[ing] all their actual debunking of logical fallacies with much more tendentious claims, wrapped in cloaks of faux-objectivity” and being “especially prone to an abuse of probabilistic methods, where they pretended they could quantify both the likelihood and the potential harm for various implausible scenarios, and then treated the results of their calculations—built on numbers they’d plucked out of the air—as an unimpeachable basis for action”, one could quibble with the disparaging description of subjective probability, but you can tell which website is being alluded to.

The cryonics-as-murder-payment lottery fraud is fictional, of course, but I’m inclined to read it as artistically licensed commentary on a strain of ends-justify-the-means thinking that does exist within EA. EA organizations don’t take money from the mob for facilitating contract killings, but they did take money from the largest financial fraud in history, which was explicitly founded as a means to make money for EA. (One could point out that the charitable beneficiaries of Sam Bankman-Fried’s largesse didn’t know that FTX wasn’t an honest business, but we have to assume that the same is true of OG in the story: only a few insiders would be running the contract murder operation, not the rank-and-file believers.)

While the depiction of OG in the story clearly shows familiarity with the source material, the satire feels somewhat lacking qua anti-EA advocacy insofar as it relies too much on mere dismissal rather than presenting clear counterarguments.[3] The effect of OG-related web forums on a vulnerable young person is described thus:

Super-intelligent AIs conquering the world; the whole Universe turning out to be a simulation; humanity annihilated by aliens because we failed to colonize the galaxy in time. Even if it was all just stale clichés from fifty-year-old science fiction, a bright teenager like Anna could have found some entertainment value analyzing the possibilities rigorously and puncturing the forums’ credulous consensus. But while she’d started out healthily skeptical, some combination of in-forum peer pressure, the phony gravitas of trillions of future deaths averted, and the corrosive effect of an endless barrage of inane slogans pimped up as profound insights—all taking the form “X is the mind-killer,” where X was pretty much anything that might challenge the delusions of the cult—seemed to have worn down her resistance in the end.

I absolutely agree that healthy skepticism is critical when evaluating ideas and that in-forum peer pressure and the gravitas of a cause (for any given set of peers and any given cause) are troubling sources of potential bias—and that just because a group pays lip service to the value of healthy skepticism and the dangers of peer pressure and gravitas, doesn’t mean the group’s culture isn’t still falling prey to the usual dysfunctions of groupthink. (As the inane slogan goes, “Every cause wants to be a cult.”)

That said, ideas ultimately need to be judged on their merits, and the narration in this passage[4] isn’t giving the reader any counterarguments to the ideas being alluded to. (As Egan would know, science fiction authors having written about an idea does not make the idea false.) The clause about the whole Universe turning out to be a simulation is probably a reference to Bostrom’s simulation argument, which is a disjunctive, conditional claim: given some assumptions in the philosophy of mind and the theory of anthropic reasoning, if a future civilization could run simulations of its ancestors, then either it won’t want to, or we’re probably in one of the simulations (because there would be more simulated than “real” histories). The clause about humanity being annihilated by failing to colonize the galaxy in time is probably a reference to Robin Hanson et al.’s grabby aliens thesis, that the Fermi paradox can be explained by a selection effect: there’s a relatively narrow range of parameters in which we would see signs of an expanding alien civilization in our skies without already having been engulfed by them.
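(To be concrete about the “more simulated than real” step, which the story’s narrator never spells out: as I recall Bostrom’s paper, the whole argument compresses into a single fraction. Writing $f_p$ for the fraction of human-level civilizations that reach a posthuman stage, $N$ for the average number of ancestor-simulations such a civilization runs, and $H$ for the average number of people who live in a civilization before it goes posthuman, the fraction of human-type observers who are simulated is

$$f_{\text{sim}} = \frac{f_p N H}{f_p N H + H} = \frac{f_p N}{f_p N + 1},$$

which is close to one unless $f_p N$ is close to zero, i.e., unless almost no civilizations make it to posthumanity, or the ones that do almost never bother running such simulations. Hence the disjunction.)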

No doubt many important criticisms could be made of Bostrom’s or Hanson’s work, perhaps by a bright teenager finding entertainment value in analyzing the possibilities rigorously. But there’s an important difference between having such a criticism[5] and merely asserting that it could exist. Speaking only to my own understanding, Hanson’s and Bostrom’s arguments both look reasonable to me? It’s certainly possible I’ve just been hoodwinked by the cult, but if so, the narrator of “Gorgon”’s snarky description isn’t helping me snap out of it.

It’s worth noting that despite the notability of Hanson’s and Bostrom’s work, in practice, I don’t see anyone in the subculture particularly worrying about losing out on galaxies due to competition with aliens—admittedly, because we’re worried about “super-intelligent AIs conquering the world” first.[6] About which, “Gorgon” ends on a line from Beth about “the epic struggle to make computers competent enough to help bring down the fools who believe that they’re going to be omnipotent.”

This is an odd take from the author[7] of multiple novels in which software minds engage in astronomical-scale engineering projects. Accepting the premise that institutional longtermist EA deserves condemnation for being goofy and a fraud: in condemning them, why single out, as the characteristic belief of this despicable group, the idea that future AI could be really powerful?[8] Isn’t that at least credible? Even if you think the people in the cult or at AI companies are liars or dupes, it’s harder to say that about eminent academics like Stuart Russell, Geoffrey Hinton, Yoshua Bengio, David Chalmers, and Daniel Dennett, who signed a statement affirming that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[9]

Egan’s own work sometimes features artificial minds with goals at odds with their creator, as in “Steve Fever” (2007) or “Crystal Nights” (2008), and with substantial advantages over biological creatures: in Diaspora (1997), the polis citizens running at 800 times human speed were peace-loving, but surely could have glassed the fleshers in a war if they wanted to. If you believe that AI could be at odds with its creators and hold a competitive advantage, scenarios along the lines of “super-intelligent AIs conquering the world” should seem plausible rather than far-fetched—a natural phenomenon straightforwardly analogous to human empires conquering other countries, or humans dominating other animals.

Given so many shared premises, it’s puzzling to me why Egan seems to bear so much antipathy towards “us”,[10] rather than regarding the subculture more coolly, as a loose amalgamation of people interested in many of the same topics as him, but having come to somewhat different beliefs. (Egan doesn’t seem to think human-level AI is at all close, nor that AI could be qualitatively superhumanly intelligent; an aside in Schild’s Ladder (2002) alludes to a fictional result that there’s nothing “above” general intelligence of the type humans have, modulo speed and memory.) He seems to expect the feeling to be mutual: when someone remarked on Twitter about finding it funny that the Less Wrong crowd likes his books, Egan replied, “Oh, I think they’ve noticed, but some of them still like the, err, ‘early, funny ones’ that predate the cult and hence devote no time to mocking it.”

Well, I can’t speak for anyone else, but personally, I like Egan’s later work, including “Death and the Gorgon.”[11] Why wouldn’t I? I am not so petty as to let my appreciation of well-written fiction be dulled by the incidental fact that I happen to disagree with some of the author’s views on artificial intelligence and a social group that I can’t credibly claim not to be a part of. That kind of dogmatism would be contrary to the ethos of humanism and clear thinking that I learned from reading Greg Egan and Less Wrong—an ethos that doesn’t endorse blind loyalty to every author or group you learned something from, but a discerning loyalty to whatever was good in what the author or group saw in our shared universe. I don’t know what the future holds in store for humanity. But whatever risks and opportunities nature may present, I think our odds are better for every thinking individual who tries to read widely and see more.[12]


  1. ↩︎

    Some people say that Greg Egan is bad at characterization. I think he just specializes in portraying reasonable people, who don’t have grotesque personality flaws to be the subject of “characterization.”

  2. ↩︎

    I do feel bad about the fraction of my recent writing output that consists of criticizing the movement—not because it’s disloyal, but because it’s boring. I keep telling myself that one of these years I’m going to have healed enough trauma to forget about these losers already and just read ArXiv papers. Until then, you get posts like this one.

  3. ↩︎

    On the other hand, one could argue that satire just isn’t the right medium for presenting counterarguments, which would take up a lot of wordcount without advancing the story. Not every written work can accomplish all goals! Maybe it’s fine for this story to make fun of the grandiose and cultish elements within longtermist EA (and there are a lot of them), with a critical evaluation of the ideas being left to other work. But insofar as the goal of “Gorgon” is to persuade readers that the ideas aren’t even worthy of consideration, I think that’s a mistake.

  4. ↩︎

    In critically examining this passage, I don’t want to suggest that “Gorgon”’s engagement with longtermist ideas is all snark and no substance. Earlier in the story, Beth compares OG believers “imagin[ing] that they’re in control of how much happiness there’ll be in the next trillion years” to a child’s fantasy of violating relativity by twirling a rope millions of miles long. That’s substantive: even if the future of humanity is very large, the claim that a nonprofit organization today is in a position to meaningfully affect it is surprising and should not be accepted uncritically on the basis of evocative storytelling about the astronomical stakes.

  5. ↩︎

    Which I think would get upvoted on this website if it were well done—certainly if it were written with the insight and rigor characteristic of a standard-issue Greg Egan protagonist.

  6. ↩︎

Bostrom’s “Astronomical Waste” concludes that “The Chief Goal for Utilitarians Should Be to Reduce Existential Risk”: making sure colonization happens at all (by humanity or worthy rather than unworthy successors) is more important than making it happen faster.

  7. ↩︎

    In context, it seems reasonable to infer that Beth’s statement is author-endorsed, even if fictional characters do not in general represent the author’s views.

  8. ↩︎

    I’m construing “omnipotent” as rhetorical hyperbole; influential subcultural figures clarifying that no one thinks superintelligence will be able to break the laws of physics seems unlikely to be exculpatory in Egan’s eyes.

  9. ↩︎

Okay, the drafting and circulation of the statement by Dan Hendrycks’s Center for AI Safety was arguably cult activity. (While Hendrycks has a PhD from UC Berkeley and co-pioneered the usage of a popular neural network activation function, he admits that his career focus on AI safety was influenced by the EA career-advice organization 80,000 Hours.) But Russell, Hinton, et al. did sign.

  10. ↩︎

    This isn’t the first time Egan has satirized the memetic lineage that became longtermist EA; Zendegi (2010) features negative portrayals of a character who blogs at overpoweringfalsehood.com (a reference to Overcoming Bias) and a Benign Superintelligence Bootstrap Project (a reference to what was then the Singularity Institute for Artificial Intelligence).

  11. ↩︎

    Okay, I should confess that I do treasure early Egan (Quarantine (1992)/​Permutation City (1994)/​Distress (1995)) more than later Egan, but not because they devote no time to mocking the cult. It’s because I’m not smart enough to properly appreciate all the alternate physics in, e.g., Schild’s Ladder (2002) or the Orthogonal trilogy (2011–2013).

  12. ↩︎

    Though we’re unlikely to get it, I’ve sometimes wished for a Greg Egan–Robin Hanson collaboration; I think Egan’s masterful understanding of the physical world and Hanson’s unsentimental analysis of the social world would complement each other well.