ProjectLawful.com: Eliezer’s latest story, past 1M words
So if you read Harry Potter and the Methods of Rationality, and thought...
“You know, HPMOR is pretty good so far as it goes; but Harry is much too cautious and doesn’t have nearly enough manic momentum, his rationality lectures aren’t long enough, and all of his personal relationships are way way way too healthy.”
...then have I got the story for you! Planecrash, aka Project Lawful, aka Mad Investor Chaos and the Woman of Asmodeus, is a story in roleplay format that I (as “Iarwain”) am cowriting with Lintamande, now past 1,000,000 words.
It’s the story of Keltham, from the world of dath ilan; a place of high scientific achievement but rather innocent in some ways. For mysterious reasons they’ve screened off their own past, and very few now know what their prescientific history was like.
Keltham dies in a plane crash and ends up in the country of Cheliax, whose god is “Asmodeus”, whose alignment is “Lawful Evil” and whose people usually go to the afterlife of “Hell”.
And so, like most dath ilani would in that position, Keltham sets out to bring the industrial and scientific revolutions to his new planet! Starting with Cheliax!
(Keltham’s new friends may not have been entirely frank with him about exactly what Asmodeus wants, what Evil really is, or what sort of place Hell is.)
This is not a story for kids, even less so than HPMOR. There is romance, there is sex, there are deliberately bad kink practices whose explicit purpose is to get people to actually hurt somebody else so that they’ll end up damned to Hell, and also there’s math.
The starting point is Book 1, Mad Investor Chaos and the Woman of Asmodeus. I suggest logging into ProjectLawful.com with Google, or creating an email login, in order to track where you are inside the story.
Please avoid story spoilers in the comments, especially ones without spoiler protection; this is not meant as an “ask Eliezer things about MICWOA” thread.
I feel like Project Lawful, as well as many of Lintamande’s other glowfic since then, has given me a much deeper understanding of… a collection of virtues including honor, honesty, trustworthiness, etc., which I now mostly think of collectively as “Law”.
I think this has been pretty valuable for me on an intellectual level—I think, if you show me some sort of deontological rule, I’m going to give a better account of why/whether it’s a good idea to follow it than I would have before I read any glowfic.
It’s difficult for me to separate how much of that is due to Project Lawful in particular, because ultimately I’ve just read a large body of work that all served as training data for a particular sort of thought pattern which I’ve since learned. But I think this particular fragment of the rationalist community has given me some valuable new ideas, and it’d be great to figure out a good way of acknowledging that.
I don’t think this would fit into the 2022 review. Project Lawful has been quite influential, but I find it hard to imagine a way its impact could be included in a best-of.
Including this post in particular strikes me as misguided, as it contains none of the interesting ideas and lessons from Project Lawful, and thus doesn’t make any intellectual progress.
One could try to do the distillation of finding particularly interesting or enlightening passages from the text, but that would be:
- A huge amount of work[1], but maybe David Udell’s sequence could be used for that.
- Quite difficult for the more subtle lessons, which are interwoven in the text.
I have nothing against Project Lawful in particular[2], but I think that including this post would be misguided, and including passages from Project Lawful would be quite difficult.
For that reason, I’m giving this a −1.
[1] Consider: after more than two years, the Hanson compilation bounty still hasn’t been fulfilled, despite a $10k reward!
[2] I’ve read parts of it (maybe 15%?), but haven’t been hooked, and every time I read a longer part I get the urge to go and read textbooks instead.
Reading Project Lawful (so far, the majority of Book 1) has given me a strong mental pointer to the questions of “how to model a civilization that you find yourself in” and “what questions to ask when trying to improve and fix it”, from a baseline of not really having a pointer to this at all (I have only lived in one civilization, and I’ve never been dropped into a new one before). I would do many things differently from Keltham (I suspect I’d build prediction markets before trying to scale up road-building), but it’s nonetheless extremely valuable to read someone’s attempt at this.
The thing I dislike most about it is that every interaction is suffused with highly adversarial, deceptive analysis. I find this pretty hard to do in real life and kind of distasteful, and it’s not a skill I aspire to have. I understand that Keltham finds himself in a highly adversarial environment, but I still don’t like it.
I really wish it had chapters or similar units of chunking. I bounced off it something like four times before being able to read this book; I had to learn the practice of “this is about enough reading for now / this is probably a good place to stop”, which most books help me with themselves.
Overall, +9 as an assessment of the quality of the contribution, though I agree that I’m not quite sure how this would fit into the review. Perhaps we could just include a select few pages of it for flavor and then link to the website.
In his dialogue Deconfusing Some Core X-risk Problems, Max H writes:
This is the “glowfic” he was referring to.
I’m very sympathetic to Niplav’s and Max H’s concerns that it’s just way too long. However, I disagree with their thinking. The burden of the length has actually fallen much harder on me than on most readers, and in spite of that, I still think that Project Lawful is worth the read.
The lessons from Project Lawful are a package deal; if you want some, you have to read the rest, and you have to watch example characters trying the lessons, sometimes failing and sometimes succeeding, and you have to care. The use of examples to approach each problem from multiple angles is a key element of what made Yud’s original Sequences great and what made the CFAR handbook effective.
Yudkowsky seems to have noticed that humans spend many hours a week reading fiction and zero hours a week becoming more rational, and that people weren’t reading the Sequences much because doing so felt like “work”. I currently think that both Project Lawful and HPMOR were the correct moves to fix these problems.
Thanks for the sort-of-response. My main point was that it’s tricky to include Project Lawful in the review, since the past three reviews produced fairly short (and small!) books. Long things are fine, but I was skeptical about the value of including all of Project Lawful in the review.
(that’s a quote from Max H, not from me)
Whoops, corrected from habryka to Max H. That section was a bit unusual; this is not a normal mistake that I’m predisposed to making (this is an example of a mistake that I’m very predisposed to making).