Rationality: Abridged
This was originally planned for release around Christmas, but our old friend Mr. Planning Fallacy said no. The best time to plant an oak tree is twenty years ago; the second-best time is today.
I present to you: Rationality Abridged—a 120-page, nearly 50,000-word summary of “Rationality: From AI to Zombies”. Yes, it’s almost a short book. But it is also less than 1/10th the length of the original. That should give you some perspective on how massively long R:AZ actually is.
As I note in the Preface, part of what motivated me to write this was the fact that the existing summaries out there (like the ones on the LW Wiki, or the Whirlwind Tour) are too short, are incomplete (e.g. they don’t summarize the “interludes”), and lack illustrations or a glossary. As such, they are mainly useful for those who have already read the articles and want to glance back at what each one was about to refresh their memory. My aim was to serve that same purpose while also being somewhat more detailed and including more examples from the articles, so that the summaries could also be used by newcomers to the rationality community to understand the key points. Thus, it is essentially a heavily abridged version of R:AZ.
Here is the link to the document. It is a PDF file (2.80 MB); if anyone wants to convert it to .epub or .mobi format and share it here, they’re welcome to.
There is also a text copy at my brand new blog: perpetualcanon.blogspot.com/p/rationality.html
I hope you enjoy it.
(By the way, this is my first post. I’ve been lurking around for a while.)
Nice!
I haven’t looked at it in detail yet, but it seems like this should also be available as a sequence on the new LessWrong (we are still finalizing the sequences features, but you can see a bunch of examples in The Library).
We could just import the HTML from your website without much hassle and publish it as a series of LW posts.
I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.
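(For anyone who wants to reproduce the conversion locally, here is a minimal sketch, not the actual repository code: it assumes Calibre is installed so that its `ebook-convert` command-line tool is on your PATH, and `rationality-abridged.html` is a placeholder name for whatever local HTML or PDF copy you start from.)

```python
import subprocess
from pathlib import Path

# Placeholder input file -- substitute your own local HTML (or PDF) copy of the document.
SOURCE = Path("rationality-abridged.html")


def convert(source: Path, fmt: str) -> Path:
    """Convert `source` to the given e-book format using Calibre's ebook-convert CLI."""
    target = source.with_suffix(f".{fmt}")
    subprocess.run(["ebook-convert", str(source), str(target)], check=True)
    return target


if __name__ == "__main__":
    for fmt in ("epub", "mobi"):
        print("Wrote", convert(SOURCE, fmt))
```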
Thanks a lot for doing this!
My pleasure!
The epub version doesn’t work. The file contains errors.
Thanks for letting me know. I use [Calibre](https://calibre-ebook.com/about) to test the files, and it opens the file without complaint. What are you using (and on what platform) to read it?
iBooks and Marvin apps (iOS).
Thank you! I don’t have a good way to test Apple products (so the fix won’t be quick), but I’ll look into it.
Massive props. (For your first post, no less?)
I see some things I think could be tweaked a bit—mostly in the form of breaking paragraphs down into somewhat more digestible chunks (each summary feels slightly wall-of-text-y to me). However, overall my main takeaway is that this is great. :)
Thanks for the kind words :) I agree with what you’re saying about the ‘wall-of-text-iness’, especially on the web version; so I’m going to add some white space.
Yeah, seriously!!! You’ve got my vote for the First Post Of The Year Award.
This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.
Direct messaging seems to be wonky at the moment, so I’ll put a suggested correction here: for 2.4, Aumann’s Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From Wikipedia: “if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.” This could fail at multiple steps; off the top of my head:
1. The humans might not be (mathematically pure) Bayesian rationalists (and in fact they’re not).
2. The humans might not have common priors (even if they satisfied 1).
3. The humans might not have common knowledge of their posterior probabilities; a human saying words is a signal, not direct knowledge, so them telling you their posterior probabilities may not do the trick (and they might not know them).
You could say failing to satisfy 1-3 means that at least one of them is “doing something wrong”, but I think it’s a misleading stretch—failing to be normatively matched up to an arbitrary unobtainable mathematical structure is not what we usually call wrong. It stuck out to me as something that would put off readers with a bullshit detector, so I think it’d be worth fixing.
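(For reference, here is a minimal formal statement of Aumann’s 1976 result, paraphrased by me rather than quoted from the abridgement:)

```latex
% Aumann (1976), "Agreeing to Disagree": agents 1 and 2 share a common prior P
% and have information partitions \Pi_1, \Pi_2 over the state space.
% Agent i's posterior for an event A at state \omega is q_i = P(A \mid \Pi_i(\omega)).
\[
\text{If } q_1 \text{ and } q_2 \text{ are common knowledge at } \omega,
\quad \text{then} \quad q_1 = q_2 .
\]
```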
Thanks for the feedback.
Here’s the quote from the original article:
One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don’t think it’s entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it’s not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (or both) is “doing something wrong”.
Nonetheless, I agree that it would be an improvement to at least be more clear about what Aumann’s Agreement Theorem actually says. So I will amend that part of the text.
Yeah; it’s not open/shut. I guess I’d say that in the current phrasing, “but Aumann’s Agreement Theorem shows that if two people disagree, at least one is doing something wrong” suggests implications without actually saying anything interesting—at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they’re getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.
I think this flaw is basically in the original article as well, though, so it’s also a struggle between accurately representing the source and adding editorial correction.
Nitpicks aside, want to say again that this is really great; thank you!
A worthy project! Very nice.
It seems like this could benefit from webification, a la https://www.readthesequences.com (including hyperlinking of glossary terms, navigation between sections, perhaps linking to the full versions, etc.—all the amenities of web-based hypertext). If this idea interests you, let me know.
Just discovered this through the archive feature, this is awesome!
I think it should be linked in more places, it’s a really useful resource.
Two years late, but thank you for making this!
Could someone convert it to epub, please?
Nice work, Quaerendo!
I’m on it!
Done!
Ideal format for beginning rationalists, thank you so much for this. I am reading it every day, turning to the full articles when I want more depth. It’s also helped me “recruit” new rationalists among my friends. I think that this work may have wide and long-lasting effects.
It would be extra-nice, and I don’t have the skills to do it myself, to have the links go to LW 2.0. Maybe you have reasons against it that I haven’t considered?
Thanks, I’m glad you found it useful!
The reason I didn’t link to LW 2.0 is that it’s still officially in beta, and I assumed that the URL (lesserwrong.com) will eventually change back to lesswrong.com (but perhaps I’m mistaken about this; I’m not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the front page.
Entirely irrelevantly, given your blog’s domain name I take it the missing half of your username is “invenietis”? :-)
Indeed ; )
I just finished reading it. I find it a very useful summary, and I know that is a hard thing to do and takes a lot of work. Thank you.
I noticed a typo:
“The exact same gamble, framed differently, causes circular preferences.
People prefer certainty, and they refuse to trade off scared values (e.g. life) for unsacred ones.
But our moral preferences shouldn’t be circular.”
scared ⇒ sacred
Thanks for pointing it out. I’ve fixed it and updated the link.