Rationality: From AI to Zombies
Eliezer Yudkowsky’s original Sequences have been edited, reordered, and converted into an ebook!
Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org. You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon. The contents are:
333 essays from Eliezer’s 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.
The ebook’s release has been timed to coincide with the end of Eliezer’s other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.
The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:
A — Predictably Wrong
B — Fake Beliefs
C — Noticing Confusion
D — Mysterious Answers
E — Overly Convenient Excuses
F — Politics and Rationality
G — Against Rationalization
H — Against Doublethink
I — Seeing with Fresh Eyes
J — Death Spirals
K — Letting Go
L — The Simple Math of Evolution
M — Fragile Purposes
N — A Human’s Guide to Words
O — Lawful Truth
P — Reductionism 101
Q — Joy in the Merely Real
R — Physicalism 201
S — Quantum Physics and Many Worlds
T — Science and Rationality
U — Fake Preferences
V — Value Theory
W — Quantified Humanism
X — Yudkowsky’s Coming of Age
Y — Challenging the Difficult
Z — The Craft and the Community
Several sequences and posts have been renamed, so you’ll need to consult the ebook’s table of contents to spot all the correspondences. Four of these sequences are almost completely new. They were written at the same time as Eliezer’s other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.
One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn’t want to read the entire blog archive chronologically. Despite being called “sequences,” their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.
I have also created a community-edited Glossary for Rationality: From AI to Zombies. You’re invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.
The cover is incorrect :(
EDIT: If you do not understand this post, read essay 268 from the book!
The code of the shepherds is terrible and stern. One sheep, one pebble, hang the consequences. They have been known to commit fifteen, and twenty-one, and even even, rather than break it.
I just bust out laughing in the office at this...and can’t share the joke with anybody.
Now I want to know if the incorrectness is intentional and if so, what message it’s supposed to carry.
It’s a bluff to make us think Yudkowsky cares about things like human happiness rather than what’s right. Don’t be fooled!
I had the same thought
There might be one more stone not visible?
10 would still be incorrect.
Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn’t realize that the stone the middle-top stone was on top of was one stone, not two.
I see nine stones, not ten.
Three at the back, three at the front, one to one side, one standing up… the question is whether it’s standing on one stone or two.
Perhaps this is already discussed elsewhere and I’m failing at search. I’d be amazed if the below wasn’t already pointed out.
On rereading this material it strikes me that this text is effectively inaccessible to large portions of the population. When I binged on these posts several years ago, I was just focused on the content for myself. This time, I had the thought to purchase for some others who would benefit from this material. I realized relatively quickly that the purchase of this book would likely fail to accomplish anything for these people, and may make a future attempt more difficult.
I think many of my specific concerns apply to a large percentage of the population.
The preface and introductions appear aimed at return readers. The preface is largely a description of ‘oops’, which means little to a new reader and is likely to trigger a negative halo effect in people who don’t yet know what that means. - “I don’t know what he’s talking about, and he seems to make lots of writing mistakes.”
There isn’t a ‘hook’. Talking about balls in urns in the intro seems too abstract for people. The rest of the sequences have more accessible examples, which most people would never reach.
Much of the original rhetoric is still in place. Admittedly that’s part of what I liked about the original posts, but I think it limits the audience. As a specific example, a family member is starting high school, likes science, and I think would benefit from this material. However, her immediate family is very religious, to the point of ‘disowning’ a sister when they found out about an abortion ~25 years ago. The existing material uses religion as an example of ‘this is bad’ frequently enough that my family member would likely be physically isolated from the material and socially isolated from me. 87% of America (86% globally) has some level of belief in religion. The current examples are likely to trigger defensive mechanisms before readers are educated about them. (Side note: ‘Waking Up: A Guide to Spirituality Without Religion’ by Sam Harris is a good book, but has this exact same issue.)
Terminology is not sufficiently explained for people seeing this material with fresh eyes. As an example, ~15% of the way through, ‘New Improved Lottery’ talks about probability distributions. There was no previous mention of these. Words with specific meanings that are used frequently go unexplained. ‘Quantitative’ is used and means something to us, but not to most people. The Kindle-provided dictionary and Wikipedia definitions are not very useful. This applies to the chapter titles as well, such as ‘Bayesian Judo’.
The level of hyperlinks, while useful for us, is not optimal for someone reading a subject for the first time. A new reader would have to switch topics in many cases to understand the reference.
References to LessWrong and Overcoming Bias only make sense to us.
Eliezer and Rob have done a lot to get the material into book form… but it’s preaching to the choir.
Specifically what I think would make this more accessible:
A more immediate hook along the lines of ‘Practicing rationality will help you make more winning decisions and be less wrong.’ (I.e.: keep reading because this=good and doable.) Eliezer was prolific enough that I think good paragraphs likely already exist, but they need connectors.
Where negative examples are likely to dissuade large numbers of people, find better examples. Avoid mentions of specific politics or religion in general. It’s better to boil the frog.
Move or remove all early references to Bayes. ‘Beliefs that are rational are called Bayesian’ means nothing to most people. Later references might as well be technobabble.
Make sure other terminology is actually explained/understandable before it’s used in the middle of an otherwise straightforward chapter. I’d try 1n & 2n-gramming the contents against Google Ngrams to identify terminology we need to make sure is actually explained/understood before casual use.
Get this closer to a 7th grade reading level. This sets a low bar at potential readers who can understand ‘blockbuster’ books in English. (This might be accomplished purely with the terminology concern/change above)
Change all hyperlinks to footnotes.
Discuss LessWrong, Overcoming Bias, Eliezer, Hanson in the preface as ‘these cool places/people where much of this comes from’ but limit the references within the content.
Is there any ongoing attempt or desire to do a group edit of this into an ‘Accessible Rationality’?
Thanks for all the comments! This is helpful. I agree ‘Biases: An Introduction’ needs to function better as a hook. The balls-in-an-urn example was chosen because it’s an example Eliezer re-uses a few times later in the Sequences, but I’d love to hear ideas for better examples, or in general a more interesting way to start the book.
‘Religion is an obvious example of a false set of doctrines’ is so thoroughly baked into the Sequences that I think getting rid of it would require creating an entirely new book. R:AZ won’t be as effective for theists, just as it won’t be as effective for people who find math, philosophy, or science aversive.
I agree with you about ‘boiling the frog’, though: it would be nice if the book eased its way into anti-religious examples. I ended up deciding it was more important to quickly reach accessible interesting examples (like the ones in ‘Fake Beliefs’) than to optimize for broad appeal to theists and agnostics. One idea I’ve been tossing around, though, is to edit Book I (‘Map and Territory’) and Book II (‘How to Actually Change Your Mind’) for future release in such a way that it’s possible to read II before I. It will still probably be better for most people to start with I, but if this works perhaps some agnostic or culturally religious readers will be able to start with II and get through more content before running into a huge number of anti-religious sentiments.
I agree about doing more to address the technobabble. In addition to including a Glossary in future editions of the book, I’ll look into turning some unnecessarily technical asides into footnotes. The hyperlinks, of course, will need to be removed regardless when the print book comes out.
I’ve had similar concerns and I agree with a lot of this.
If we really want to approach a 7th grade reading level, then we had better aim for kindergartners. I remember reading through the book trying to imagine how to bring it down several levels and thinking about just how many words I was taking for granted as a high-IQ adult who has had plenty of time to just passively soak up vocabulary and overviews of highly complex fields. I just don’t think we’re there yet; I think that’s why there are things like SPARC where we’re trying it out on highly intelligent high school students who are unusually well-educated for their age.
To my knowledge this is already a priority.
I find that there’s a wide disparity between LW users in intelligence and education, and I don’t know if I see a wiki-like approach converging on anything particularly useful. I would imagine arguments about what’s not simple enough and what’s not complex enough, and about people using examples from their pet fields that others don’t understand. It might work if you threw enough bodies at it, like Wikipedia, but we don’t have that many bodies. I don’t know how others feel.
The point wasn’t to aim for 7th graders, but a 7th grade level which would make it generally accessible to busy adults.
See Mark’s post regarding 7th grade; my intention was aimed at adults, who (for whatever reason) seem to like the 7th grade reading level.
I’m not sure how to effectively crowdsource this without getting volunteers for specific (non-overlapping) tasks and sections. I share your concern with the wiki method, unless each section has a lead. At work we regularly get 20 people to collaborate on ~100-page proposals, but the same incentives aren’t available in this case. Copyediting is time-consuming and unexciting; does anyone know of similar crowdsourced efforts? I found a few, but most still had paid writers.
‘Accessible Rationality’ already exists… in the form of a wildly popular Harry Potter fanfiction.
What does 1n or 2n-gramming mean? I’m looking at Google Ngrams, and it’s not obvious to me.
1 gramming is checking single words; should identify unfamiliar vocabulary. (Ex: quantifiable)
2 gramming would check pairs of words; should identify uncommon phrases made of common words (ex: probability mass—better examples probably exist)
The 1-gram/2-gram terminology may be made up, but I think I’ve heard it used before.
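Concretely, here is a minimal sketch of what that check could look like (Python; the filenames and the common-phrase list are hypothetical stand-ins for a plain-text dump of the book and a frequency list derived from something like the Google Ngrams data):

```python
import re
from collections import Counter

def ngrams(tokens, n):
    """Return all consecutive n-word sequences from a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical inputs: a plain-text dump of the book, and a newline-
# separated list of common English words/phrases from a frequency corpus.
book_text = open("book.txt").read().lower()
common = set(open("common_phrases.txt").read().splitlines())

tokens = re.findall(r"[a-z']+", book_text)

# Flag frequently used 1-grams and 2-grams that never appear in the
# common-phrase list: these are candidates for jargon that needs an
# explanation before first use ("quantitative", "probability mass", ...).
for n in (1, 2):
    counts = Counter(ngrams(tokens, n))
    flagged = [(gram, c) for gram, c in counts.most_common()
               if gram not in common and c >= 5]
    print("--- possibly unfamiliar %d-grams ---" % n)
    for gram, c in flagged[:20]:
        print("%5d  %s" % (c, gram))
```

The frequency threshold and cutoffs are arbitrary; the point is just to surface terms the book leans on heavily that a general reader is unlikely to know.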
Thanks!
What’s the payoff of changing hyperlinks to footnotes? Given all of the other, substantive, issues you raised, that seems unlikely to make any significant difference.
Two reasons:
Frequently having multiple words as hyperlinks in ebooks means that ‘turning the page’ may instead change chapters. Maybe it is just a problem with the iPhone Kindle app.
For links that reference forward chapters, what is a new reader to do? They can ignore it and not understand the reference, or they can click, read, and then try to go back… but it’s not a very smooth reading experience.
Granted, I probably wouldn’t have noticed the second issue, if not for the first issue.
I don’t think the point of the sequences or the book is to be accessible to everyone. If you want to write ‘Accessible Rationality’ it likely makes more sense to start from scratch.
Agreed that it may not be the point, but other than what I think are fixable issues, the book contents work well. I don’t think starting from scratch would be a large enough improvement to justify the extra time and increased chance of failure.
I think the big work is in making the examples accessible, and Eliezer already did this for the -other- negative trigger.
Just a reminder that mistakes/problems/errors can be sent to errata@intelligence.org and we’ll try fix them!
I can’t mail that address; I get a failure message from Google:
I’ll post my feedback here:
Oops. Should be fixed now.
Thanks!
D’oh. It’s all good in the epub, but something broke (for very dumb reasons) converting the mobi. It’s fixed now. If you’ve already bought the book through Amazon or e-junkie, you’ll have to re-download the file to get the fixed one (in a few hours, while Amazon approves the new book). Sorry about that.
Not much we can do about this. Amazon is very restrictive in how you can modify the styling of links. It works fine for displays with color, but people with e-ink displays are out of luck. :-(
Thanks.
Same.
I think you meant “try to fix them” :)
You should send that to errata@intelligence.org.
Yay! Now I’m sending this to all of my friends!
My first reaction as well.
But that is easy. What I haven’t figured out yet is how to get them to read it.
I’ve found that the people most interested in reading it are the ones I’ve already gotten addicted to HPMOR.
I was tricked into doing this. Years ago someone posted an ebook claiming to be the Sequences, but was actually just every single Yudkowsky blog post from 2006 to 2010 -_-
It wasn’t until I noticed that only Yudkowsky’s side of the FOOM debate was in there that I realized what had happened.
It wasn’t meant as a trick! Organising them would have been very hard.
Can confirm!
Just as a little bit of a counterpoint, I loved the 2006-2010 ebook and was never particularly bothered by the length. I read the whole thing at least twice through, I think, and have occasionally used it to look up posts and so on. The format just worked really well for me. This may be because I am an unusually fast reader, or because I was young and had nothing else to do. But it certainly isn’t totally useless :P
Oh, I didn’t mean to imply I didn’t like it! It was a welcome companion for hundreds of long school bus journeys.
Good work guys!
This might be the excuse I need to finally go through the complete sequences as opposed to relying on cherry-picking posts whenever I encounter a reference I don’t already know.
Excellent, thank you! Any update on when the real book will be available for purchase for those of us who don’t do ebooks?
I second this question! I want to have this book in flesh, staying on my bookshelf.
Can I ask where the money for the book goes, and to whom?
From Amazon, 30% goes to Amazon and 70% goes to MIRI.
From e-junkie (the pay-what-you-want option): 100% goes to MIRI, minus PayPal transaction fees (a few %).
Couldn’t you pay $0.00, send the money to MIRI, and avoid transaction fees?
Yeah. Main reason to do it this way is fear of trivial inconveniences.
Depending on how you sent money to MIRI, we’d incur transaction fees anyway (donating through PayPal using a PayPal account or CC). ACH donations have lower fees, and checks don’t have any, but both of those take staff time to process, so unless the donation was say $50 or more, it probably wouldn’t be worth it.
What about Bitcoin?
No fees, but also takes some extra staff time (additional bookkeeping/accounting work is involved), so there is some cost to it. If we got more BTC donations it would reduce the time cost per donation, due to effects of batching, but as it stands now, they are usually processed (record added to our donor database and accounting software) on an individual basis.
One thing that takes a significant amount of time is when someone mis-pays a Coinbase invoice (sends a different amount of BTC than they indicated on the Coinbase form on our site). Coinbase treats these payments in a different way that ends up requiring more time to process on our end.
All that being said we like having the BTC donation option, and it always makes me happy to see one come in. So if making contributions via BTC is your preference, I’m all for it :)
They use coinbase, so according to this it’s free up to $1 million.
It should be free, period. Coinbase doesn’t charge fees for registered not-for-profits.
Yup, but those are convenient distribution platforms.
Perhaps this should be noted in the main article. I was thinking about buying it through Amazon until I saw this!
I am impressed. The production quality on this is excellent, and the new introduction by Rob Bensinger is approachable for new readers. I will definitely be recommending this over the version on this site.
I paid $0 because I’d rather not pay transaction fees on a donation to charity. You can donate to MIRI directly here:
https://intelligence.org/donate/
And CFAR here:
http://rationality.org/donate/
See my comment here about this.
I used and prefer Bitcoin, which wasn’t an option for the eBook and which carries smaller fees.
The zip file has some extra Apple metadata files included. Nothing too revealing, just Dropbox bits.
Congratulations, well done!
Side note: the “Glossary” link seems to be broken.
Should be working now. I accidentally made it an internal link.
For reasons, I suggest that Bayesian Judo doesn’t make EY look good to people who aren’t already cheering for his team, and maybe it wasn’t wise to include it.
More generally, the book feels a bit… neutered. Things like, for example, changing “if you go ahead and mess around with Wulky’s teenage daughter” to “if you go ahead and insult Wulky”. The first is concrete, evocative, and therefore strong, while the latter is fuzzy and weak. Though my impression may be skewed just because I remember the original examples so well.
I am thinking of recommending this to people, all of whom are unlikely to pay. Is having people acquire this for $0 who would otherwise not have read it beneficial or harmful to MIRI? (If the answer is “harmful because of paying for people to download it”, I can email it to my friends with a payment link instead of directing them to your website.)
Definitely beneficial; there is no cost worth considering when it comes to the next marginal person getting the book through our site, even if their selection is $0. So don’t worry about directing them there.
With SumatraPDF 3.0 on Windows 8.1 x64, the links in the PDF version do not show up. With Adobe Reader 11 on Windows 7 x86, they look fine. On the other hand, SumatraPDF can also handle the MOBI and EPUB versions.
I’m getting problems too. The contents pages look like this, for example.
I have been creating a tex version at: https://github.com/jrincayc/rationality-ai-zombies
I have used Lulu to print the book; instructions are at: https://github.com/jrincayc/rationality-ai-zombies. Or you could print it somewhere else that allows you to print a 650-page 8.5-by-11-inch book. (If you try it with a different place, let me know.) I have read through the entire printed version and fixed all the formatting issues that I found in the beta7 release in the new beta8 release.
I have relinked the footnotes. It is now reasonably editable. I’ve put up pdfs at https://github.com/jrincayc/rationality-ai-zombies/releases
There is still a lot of work to do before I consider it done, but it is more or less useable for some purposes. I printed off a copy for myself from Lulu for about $12. Here is the two column version that can be printed out as a single volume: http://jjc.freeshell.org/rationality-ai-zombies/rationality_from_ai_to_zombies_two_column_beta2.pdf
Awesome! How large is it altogether (in words)?
Approximately 600,000 words!
Which is roughly the length of War and Peace or Atlas Shrugged.
Ah, so about as large as it takes for a fanfic to be good. :P
Hi, and thanks for the awesome job! Will you keep a public record of changes you make to the book? I’m coordinating a translation effort, and that would be important to keep it in sync if you change the actual text, not just fix spelling and hyperlinking errors.
Edit: Our translation effort is for Portuguese only, and can be found at http://racionalidade.com.br/wiki .
Yes, we’ll keep a public record of content changes, or at least a private record that we’d be happy to share with people doing things like translation projects.
How is that translation coming along? I could help with German.
We’re translating to Brazilian Portuguese only, since that’s our native language.
I liked Robby’s introduction to the book overall, but I find it somewhat ironic that right after the prologue, where Eliezer mentions that one of his biggest mistakes in writing the Sequences was focusing on abstract philosophical problems that are removed from people’s daily problems, the introduction begins with the abstract balls-in-an-urn example.
The first (though not necessarily best) example of how to rewrite this in less abstract form that comes to mind would be something like “Imagine that you’re standing by the entrance of a university whose students are seven tenths female and three tenths male, and observing ten students go in...”; with the biased example being “On the other hand, suppose that you happen to be standing by the entrance of the physics department, which is mostly male even though the university in general is mostly female.”
Some unnecessary technical jargon that could have been gotten rid of also caught my eye in the first actual post: e.g. “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” could have been rewritten to be more broadly understandable, e.g. “rational agents make decisions that are the most likely to produce the kinds of outcomes they’d like to see”.
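For readers wondering what the original jargon cashes out to, here is a toy illustration with made-up numbers (my sketch, not anything from the book):

```python
# Each option maps to (probability, utility) pairs over its possible
# outcomes; the probabilities for each option sum to 1.
options = {
    "take umbrella":  [(0.3, 5.0), (0.7, 8.0)],     # rain / no rain
    "leave umbrella": [(0.3, -10.0), (0.7, 10.0)],
}

def expected_utility(outcomes):
    # Weight each outcome's utility by its probability, then sum.
    return sum(p * u for p, u in outcomes)

# "Maximizing the probabilistic expectation of a utility function" just
# means picking the option whose weighted sum is highest.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # -> "take umbrella" (7.1 vs 4.0)
```

Which, as the plain-language rewrite says, is just “make the decision most likely to produce the kinds of outcomes you’d like to see.”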
I could spend some time making notes of these kinds of things and offering suggested rewrites for making the printed book more broadly accessible—would MIRI be interested in that, or would they prefer to keep the content as is?
Part of the idea behind the introduction is to replace an early series of posts: “Statistical Bias”, “Inductive Bias”, and Priors as Mathematical Objects. These get alluded to various times later in the sequences, and the posts ‘An Especially Elegant Evolutionary Psychology Project’, ‘Where Recursive Justification Hits Bottom’, and ‘No Universally Compelling Arguments’ all call back to the urn example. That said, I do think a more interesting example (whether or not it’s more ‘ordinary’ and everyday) would be a better note to start the book on.
Do feel free to send stylistic or substantive change ideas to errata@intelligence.org, not just spelling errors.
This came to mind for me as well. This, from Burdensome Details, popped out at me: “Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them.” All this does for me is pattern-match to a Wikipedia article I once read about the concept of entropy in information theory; I don’t really know what it means in any precise sense or why it might be true. And the essay even seems to stand on its own without that part. I’ve come to ignore my fear of not understanding things unless I don’t understand pretty much everything I’m reading, but I think a lot of people would get scared that they didn’t know enough to read the book and just stop reading.
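For what it’s worth, here is the standard identity I believe that aside is gesturing at (my reconstruction, not text from the book): if “absurdity” is read as negative log probability, then the absurdities of a conjunction add while the probabilities multiply:

```latex
P(A \wedge B) = P(A)\,P(B \mid A)
\quad\Longrightarrow\quad
-\log P(A \wedge B) = -\log P(A) \;-\; \log P(B \mid A)
```

So each added detail makes a story strictly more absurd, adding its own penalty; the mistake the essay describes is mentally averaging the details’ plausibilities instead. But I agree the essay stands on its own without the aside.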
Come to think of it, we could collect proposed rewrites / deletions to some wiki page: this seems suitable for a communal effort. The “deletions” wouldn’t actually need to be literal deletions, they could just be moved into a footnote. E.g. in the Burdensome Details article, a footnote saying something like “technically, you can measure probabilities by logarithms and...”
I like the idea of turning a lot of these jargony asides, especially early in the book, into footnotes. We’ll be needing to make heavier use of footnotes anyway in order to explicitly direct people to other parts of the series in places where there will no longer be a clickable link. (Though we won’t do this for most clickable links, just for the especially interesting / important ones.)
You’re welcome to use a wiki page to list suggested changes, or a Google Doc; or just send a bunch of e-mails to errata@intelligence.org with ideas.
I don’t have PayPal or a credit card or bitcoins or similar stuff, so $0 price for now; I will look into donating from my Maestro debit card, or maybe a direct transfer, although international transfer fees may make that not worth the while. That and cash are the only methods I use, since I rarely need anything I cannot buy with them. (I use gift cards purchased in shops for Steam and Google Play.) I am thinking about purchasing some bitcoins for € for such donation purposes; can anyone recommend a safe and debit-card (or sofort.com) compatible service?
If you set the price to $0.00 then you don’t need to give any payment information.
That’s awesome!
Neither link works, though: you have lesswrong.com/ prefixing every correct address.
Fixed! Thanks.
Do people think there is value in making an audio book from this?
I was thinking it would be possible to do in a similar process to the HPMOR audiobook, with people contributing different chapters. If there is interest in doing this, and if it is permitted, then I will happily volunteer to coordinate the effort. If this idea does have support, then given the discussion below about how the book could be improved, would it make more sense to postpone an audiobook to allow for sensible changes, or is that an unnecessary delay in search of unreachable perfection?
Yes; one is being made by Castify.
I just finished listening to the audiobook version of Rationality: From AI to Zombies. Lots of thanks to Yudkowsky and everyone else who was involved in making this book and the audiobook. I do not know who the reader of the audiobook is, but thanks all the same.
I am writing this comment as my way of praising this book. I will try to summarize what I have personally learned from it, in the hope that someone who was involved will read this post and feel some pride in having helped me in my self-improvement. But I am also writing this comment because I just want to express my thoughts after finishing the book.
I have not had any major change of mind, but I have had several minor ones, which might very well continue to grow.
Listening to Yudkowsky’s words has made me more confident, because he is saying many things that I already intuitively knew, but could not properly explain myself, and could therefore not be sure I was right. I am still not 100% certain I am right, but I am more confident, and I believe that this is a good thing. Smart people should be confident. No, this is not hindsight bias, because:
I did not always instantly agree, so I do know the difference.
I have been actively introspecting since I was 12, so I know most of my brain’s tricks.
I never set out to be a rationalist. I don’t even remember having a pre-LessWrong concept for the word “rationalist”. There was just correct thinking and incorrect thinking, and obviously correct thinking is the way that systematically leads you to the truth, because how else would you measure correctness? Maybe this saved me from falling into some of the rationalist tropes that Yudkowsky warns about. Or maybe I avoided them because I have read too little science fiction. Or maybe it was because I looked at these types of tropes and saw an author who clung to the obviously wrong, but warm and fuzzy, idea that every human has the same number of skill points.
I wonder who sets out to be rational without having something specific they need rationality for. Maybe the same kind of people who identify as atheists? I am an atheist, but I don’t identify as such, because in my country this is mostly a non-issue.
I found LessWrong because my new boyfriend encouraged me to read here, and I actually got through the book, because I like audiobooks.
The pre-LessWrong me was a truth seeker, and as such, I thought a lot about the Way as applied to truth-seeking. I had a crisis of faith several years ago, questioning the validity of science. But I never really thought about applying systematic reasoning to decisions under uncertainty. When, in the past, I was confronted with a decision which I did not know how to reason out, I used to deliberately hand the decision over to my feelings. Because, I reasoned, if I don’t know what is right anyway, I might as well save myself the fight of going against my impulses. I hope that I can use what I have learned here to do better.
Another thing I have realized is that I am such a pushover for perceived social norms. I have noticed a significant mental shift in my brain, just from having someone in my ear who casually mentions many-worlds and cryonics as if these were the most normal things in the world. Intellectually I was already convinced; I already knew the right answer before listening to the book, but I still needed the extra nagging to get all of my brain on board with it. I think that this has been the single most important insight I got from the book.
One reason I had not tried to develop the art of rational decision-making before is that I knew that I was not strong enough to counter my emotional preferences. But I was wrong. I now have one systematically applicable self-hack, and probably there are more out there to find. I have hope of being able to take charge of my motivation, and I have reasons to fight for control.
Current me is an aspiring effective altruist. I do not strive to be a perfect altruist, because I do have some selfish preferences that I do not expect to go away. But I am going to get my ass out of the comfortable bubble of “I can’t do anything anyway” and do something. Though I have not decided yet if I am going to take the path of earning to give, or if I should get directly involved in some project myself. I am looking into both ways.
Finally, here is one of my favorite quotes from the book:
I’m leaving this comment so that I can find my way back here in the future.
Would you mind writing a follow-up review about how you joined the rationalist/EA community? I’m interested to see how your journey progressed 🙂
I got into AI Safety. My interest in AI Safety lured me to a CFAR workshop, since it was a joint event with MIRI. I came for the Agent Foundations research, but CFAR turned out just as valuable. It helped me start to integrate my intuitions with my reasoning, through IDC and other methods. I’m still in AI Safety, mostly organising, but also doing some thinking, and still learning.
My resume lists all the major things I’ve been doing. Not the most interesting format, but I’m probably not going to write anything better anytime soon.
Resume—Linda Linsefors—Google Docs
Is a printed six-volume set still being worked on?
There are printed versions of book 2, that are given out sometimes at CFAR.
How to actually change your mind (book 2) is definitely a great section of Rationality: From AI to Zombies.
Not that I know of.
Does the book (especially the printed version) have training problems after sections? (I don’t have it, sorry if the question is redundant).
It does not.
Maybe it should, for people who won’t discuss things online for some reason.
Might be worth including the Amazon.co.uk and other store links.
A friend of mine is interested in reading this book, but would prefer a printed copy. Is there any chance that this book will be published any time soon?
I have used the two column version: https://github.com/jrincayc/rationality-ai-zombies/releases/download/beta3/rationality_from_ai_to_zombies_2c.pdf with https://www.lulu.com/ to make a printed version for myself. (Update: beta3 has quite a few problems that have been fixed in newer versions, so grab a new release if you are printing it: https://github.com/jrincayc/rationality-ai-zombies/releases )
Note that there are problems with that pdf, so it isn’t perfect, but it might work. The regular PDF is too long to print as a single book.
Is there anything on procrastination? I’m tempted to buy this book instead, cause the dude has an alright podcast too. I don’t listen to it anymore cause it’s boring and not consistently novel information, but yeah.
When I feel like this, I don’t want to read complex-sounding chapters like ‘Rationality and Politics’ and ‘Death Spirals’ that, without having read the Sequences, don’t mean shit to me and could equally appear in some random Trotskyist propaganda from the weird organisation down the road.
When are these pop-rationality books gonna be replaced by a new generation of books on, say, Bonferroni corrections for everyday life, or a conceptual introduction to regression?
You’re neither reaching the uninitiated nor furthering the knowledge of the adepts. You’re just preaching to the choir and making some coin from it! Defend your honour!
edit 1: fixed links
Sorry for my problem. I tried downloading 15 times; only once did it start, and it stopped at 1.5M/30.6M. The other attempts couldn’t even get on track. I wish to use another source, or some kind friend could send the pdf to 513493106@qq.com? I deeply bow for your help!