Introducing AI-Powered Audiobooks of Rational Fiction Classics
(An ElevenLabs reading of this post is available as an embedded audio element.)
I’m excited to share a project I’ve been working on that I think many in the LessWrong community will appreciate: converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, under the name “Askwho Casts AI”.
The keystone of this project is an audiobook version of Planecrash (AKA Project Lawful), the epic glowfic authored by Eliezer Yudkowsky and Lintamande. Given the scope and scale of this work, with its large cast of characters, I’m using ElevenLabs to give each character their own distinct voice. Producing this audiobook version of the story has been a labor of love, and if you’ve bounced off the text before, I hope this proves a more accessible way in.
Alongside Planecrash, I’m also working on audiobook versions of two other rational fiction favorites:
Luminosity by Alicorn (to be followed by its sequel Radiance)
Animorphs: The Reckoning by Duncan Sabien
I’m also putting out a feed where I convert any articles I find interesting, many of which come from the rationalist sphere.
My goal with this project is to make some of my personal favorite rational stories more accessible by allowing people to enjoy them in audiobook format. I know how powerful these stories can be, and I want to help bring them to a wider audience and to make them easier for existing fans to re-experience.
I wanted to share this here on LessWrong to connect with others who might find value in these audiobooks. If you’re a fan of any of these stories, I’d love to get your thoughts and feedback! And if you know other aspiring rationalists who might enjoy them, please help spread the word.
What other classic works of rational fiction would you love to see converted into AI audiobooks?
Is there a way to access this without a Substack subscription?
Yes! There are RSS feeds for Planecrash, Luminosity, Animorphs and the article feed.
I’m not sure whether this has been discussed enough elsewhere regarding Project Lawful, but it is worth reading despite its huge time commitment, even at a fairly high value-of-an-hour. The specifics of how it is written also add many more elements to the “pros” side of the general pros-and-cons considerations of reading fiction.
It is also probably worth reading even if you’ve got a low tolerance for sexual themes, as long as that tolerance isn’t so low that you’d feel injured by having to read that sort of thing.
If you’ve ever wondered why Eliezer describes himself as a decision theorist, this is the work that I’d say will help you understand what that concept looks like in his worldview.
I read it first in the glowfic format, and by the time I found the Askwho AI audiobook version, enough time had passed since finishing it that I started listening to that as well.
It was taken off one of the sites hosting it for a TOS violation, so I’ve since been following it update to update on Spotify.
Takeaways from both formats:
Glowfic is still superior if you have the internal motivation circuits for reading books in text. The format includes reference images for the characters in different poses/expressions to follow along with the role playing. The text often includes equations, lists of numbers, or things written on whiteboards which are hard to follow in pure audio format. There are also in-line external links for references made in the work—including things like background music to play during certain scenes.
(I recommend listening to the music anytime you see a link to a song.)
This being said, Askwho’s AI audiobook is the best member of its format I’ve seen so far. If you have never listened to another AI voiced audiobook, I’d almost recommend not starting with this one, because you risk not appreciating it as much as it deserves, and simultaneously you will ruin your chances of being able to happily listen to other audiobooks done with AI. This is, of course, a joke. I do recommend listening to it even if it’s the first AI audiobook you’ll ever listen to—it deserves being given a shot, even by someone skeptical of the concept.
I think a good compromise position with the audio version is to listen to chapters with lecture content while keeping the glowfic open in another tab, in “100 posts per page” mode, on the page containing the rough start-to-end transcript for that episode. Some of the discussion you will likely be able to follow in working memory while staring at a waiting-room wall, but good luck with the heavily mathematical stuff. If you’re driving and reach a heavy-math section, it’d probably be a good idea to have that section open on your phone so you can scroll through it again ten minutes later while you’re waiting for your friend to meet you out in the parking lot.
TL;DR: IMO Project Lawful is worth reading for basically everyone, despite its length and other tiny flinches over content/genre/format. The glowfic format has major benefits, but Askwho did an extraordinarily good job at making the AI-voiced format work. You should probably have the glowfic open somewhere alongside the audiobook, since some things will be lost if you try to take it in purely as an audiobook.
I gave it a try two years ago, and I really liked the logic lectures early on (basically a narrativization of HAE101 (for beginners)), but gave up soon after. Here are some other parts I learned valuable stuff from:
when Keltham said “I do not aspire to be weak.”
and from an excerpt he tweeted (I don’t know the context):
“if at any point you’re calculating how to pessimize a utility function, you’re doing it wrong.”
Keltham briefly talks about the danger of (what I call) “proportional rewards”. I seem not to have noted down where in the book I read it, but it inspired this note:
If you’re evaluated on whether you’re doing your best, you have an incentive to (subconsciously or otherwise) be weaker, so you can fake doing your best with less effort. Never encourage people with “you did your best!”. An objective output metric may be fairer, all things considered.
and it furthermore caused me to try harder to eliminate internal excusification-loops in my head. “Never make excuses for myself” is my ~3rd Law, and Keltham helped me become hyperaware of it.
(unrelatedly, my 1st Law is “never make decisions, only ever execute strategies” (origin).)
I already had extensive notes on this theme, originally inspired by “Stuck In The Middle With Bruce” (JF Rizzo), but Keltham made me revisit it and update my behaviour further.
re “handicap incentives”, “moralization of effort”, “excuses to lose”, “incentive to hedge your bets”
I also have this quoted in my notes, though only to use as diversity/spice for explaining stuff I already had in there (I’ve placed it under the idionym “tilling the epistemic soil”):
Keltham > “I’m—actually running into a small stumbling block about trying to explain mentally why it’s better to give wrong answers than no answers? It feels too obvious to explain? I mean, I vaguely remember being told about experiments where, if you don’t do that, people sort of revise history inside their own heads, and aren’t aware of the processes inside themselves that would have produced the previous wrong or suboptimal answer. If you don’t make people notice they’re confused, they’ll go back and revise history and think that the way they already thought would’ve handled the questions perfectly fine.”
Do you have recommendations for other sections you found especially insightful or high in potential-to-improve-effectiveness? No need to explain, but links are appreciated so I can take a look without reading the whole thing.
(edit: the formatting on this appears to have gone all to hell and I don’t know how to fix it! Uh oh!)
(edit2: maybe fixed? I broke out my commentary into a second section instead of doing a spoiler section between each item on the list.)
(edit3: appears fixed for me)
Yep, I can do that legwork!
I’ll add some commentary, but I’ll “spoiler” it in case people don’t want to see my takes ahead of forming their own, or just on general “don’t spoil (your take on some of) the intended payoffs” grounds.
https://www.projectlawful.com/replies/1743791#reply-1743791
https://www.projectlawful.com/posts/6334 (Contains infohazards for people with certain psychologies; do not twist yourself into a weird and uncomfortable condition contemplating “Greater Reality”. Notice confusion about it quickly and refocus on ideas for which you can more easily update your expectations of future experience within the universe you appear to be getting evidence about. “Sanity checks” may be important. The ability to say to yourself “this is a waste of time/effort to think about right now” may also be important.) (This is a section of Planecrash where a lot of the plot-relevant events have already taken place and are discussed, so MAJOR SPOILERS.) (This is the section that the “Negative Coalition” tweet came from.)
https://www.projectlawful.com/posts/5826
https://www.projectlawful.com/replies/1778998#reply-1778998
https://www.projectlawful.com/replies/1743437#reply-1743437
https://www.projectlawful.com/replies/1786657#reply-1786657
https://www.projectlawful.com/replies/1771895#reply-1771895
“No rescuer hath the rescuer. No Lord hath the champion, no mother and no father, only nothingness above.” What is the right way to try to become good at the things Eliezer is good at? Why does naive imitation fail? There is a theme here, one whose corners appear all over Eliezer’s work; see Final Words for another thing I’d call a corner of this idea. What is the rest? How does the whole picture fit together? Welp. I started by writing a conversation in the style of Gödel, Escher, Bach, or A Semitechnical Introduction to Solomonoff Induction, where a version of me was having a conversation with an internal model of Eliezer I named “Exiezer”, and used that to work my way through connecting all of those ideas in an extended metaphor about learning to craft handaxes. I may do a LessWrong post including it, if I can tie it to a sufficiently high-quality object-level discussion of education and self-improvement.
This is a section titled “the meeting of their minds”, where Keltham and Carissa go full “secluded setting, radical honesty, total mindset dump.” I think it is one of the most densely interesting parts of the book, and it represents a few techniques more people should try. “How do you know how smart you really are?” Well, have you ever tried writing a character smarter than you think you are, doing something that requires more intelligence than you feel like you have? What would happen if you attempted that? You can have all the time in the world to plan out every little detail, check over your work, list alternatives, study relevant examples/material, etc. This section has the feeling of people actually attempting to run the race they’ve been practicing for, using the crispest versions of the techniques they’ve been iterating on. Additionally: have you ever attempted to “meet minds” with someone? What sort of skills would you want to single out to practice? What sort of setting seems like it’d work for that? This section shows two people working through a really serious conflict. It’s a place where their values have come seriously into conflict, and yet, to get more of what they both want, they have to figure out how to cooperate. Also, they’ve both ended up pretty seriously damaged, and they have things they need to untangle.
This is a section called “to earth with science” and… well, how useful it is depends on how useful it will be for you to think more critically about the academic/scientific institutions we have on this planet. It’s very much Eliezer doing a pseudo-rant about what’s broken here, echoing the tone of something like Inadequate Equilibria. The major takeaway is something like what you get from a piece of accurate satire: the lèse-majesté that shatters some of the memes handed down to you by the wiser-than-thou people who grimly say “we know it’s not perfect, but it’s the best we have” and expect you not to have follow-up questions about that type of assertion.
This is my favorite section from “to hell with science.” The entire post is a great lecture about the philosophy and practice of science, but this part in particular touches on a concept I expect to come up in more detail later regarding AIs and agency. One of the cruxes of this whole AI debate is whether you can separate out “intelligence” and “agency”—and this part provides an explanation for why that whole idea is something of a failure to conceptualize these things correctly.
This is Keltham lecturing on responsibility, the design of institutions, and how to critique systems from the lens of someone like a computer programmer. This is where you get some of the juiciest takeaways about Civilization as Eliezer envisions it. The “basic sanity check” of “who is the one person responsible for this” & requisite exception handling is particularly actionable, IMO.
“Learn when/where you can take quick steps and plant your feet on solid ground.” There’s something about feedback loops here, and the right way to start getting good at something. May not be terribly useful to a lot of people, but it stood out as a prescription for people who want to learn something. Invent a method, try to cheat, take a weird shortcut, guess. Then, check whether your results actually work. Don’t go straight for “doing things properly” if you don’t have to.
Keltham on how to arrive at Civilization from first principles. This is one of the best lectures in the whole series, from my perspective. It’s framed as a thought-experiment that I could on-board and play with in spare moments.
Hopefully some of these are interesting and useful to you Mir, as well as others here. There’s a ton of other stuff, so I may write a follow-up with more later on if I have more time.
This is awesome, thank you so much! The green leaf indicates that you’re new here (or a new alias)? Happy for LW! :)
I first learned this lesson in my youth when, after climbing to the top of a leaderboard in a puzzle game I’d invested >2k hours into, I was surpassed so hard by my nemesis that I had to reflect on what I was doing. Thing is, they didn’t just surpass me and everybody else, but instead continued to break their own records several times over.
Slightly embarrassed by having congratulated myself for my merely-best performance, I had to ask “how does one become like that?”
My problem was that I’d always just been trying to get better than the people around me, whereas their target was the inanimate structure of the problem itself. When I had broken a record, I said “finally!” and considered myself complete. But when they did the same, they said “cool!”, and then kept going. The only way to defeat them would be to not try to defeat them, and instead focus on fighting the perceived limits of the game itself.
To some extent, I am what I am today, because I at one point aspired to be better than Aisi.
Two years ago, I didn’t realize that 95% of my effort was aimed at answering what were ultimately other people’s questions. What happens when I learn to aim all my effort at questions purely arising from bottlenecks I notice in my own cognition?
I hate how much time my brain (still) wastes on daydreaming and coming up with sentences optimized for impressing people online. What happens if I instead learn to align all my social-motivation-based behaviours with what someone would praise if they had all the mental & situational context I have, and were harder to fool than I am? Can my behaviour then be maximally aligned with [what I think is good], and [what I think is good] be maximally aligned with my best effort at figuring out what’s good?
I hope so, and that’s what Maria is currently helping me find out.
Thanks so much! Glad you’re enjoying the audio format. I really agree this story is worth “reading” in some form; that’s why I’m working on this project.
Thanks for making these! How expensive is it?
It is not cheap: around $20 per hour of audio. Luckily there are people on board with this project who help cover the cost through a Patreon.
Is the recording schedule based on Patreon cash flow? I.e., if more people support it, could we get episodes faster? Or is it also limited by your time? (I’m not sure how much manual labour goes into this vs. just paying for the service.) Would it be possible to put money toward a specific project? That could be an interesting incentive for people who’d like to see more of their favourite story sooner. :)
Added an embedded audio element for you.
Thanks, appreciate it.
Thank you!
The Planecrash audiobook is great, and I would not have read it if it were not for the audio version.
Thanks! Glad you are enjoying it.