PSA: The Sequences don’t need to be read in sequence
This week, I hung out with the LessWrong team while they talked to relatively new users. New users often had a vague intention to read Eliezer’s original Sequences, but were blocked on the size of the project. They thought the Sequences would only work in, well, sequence.
I just polled eight people (including me) who have read the Sequences on whether they only work in sequence:
7 people said they work out of sequence (though 2 noted that it might be better to read a given sequence in order)
1 person said they thought it was necessary to read any given sequence in order, but it didn’t matter if you read one sequence (e.g. A Human’s Guide to Words) before or after another (e.g. Mysterious Answers to Mysterious Questions)
A typical sequence post has many links to other sequence posts. But these are mostly context and elaboration. The posts tend to work well standalone.
Here are three posts you might get started with:
Fake Explanations. The first post in the Mysterious Answers to Mysterious Questions sequence, described by Eliezer as “probably the most important core sequence in Less Wrong”.
Leave a Line of Retreat. Letting go of a belief that’s important to you is hard. Particularly if you think stuff you care about depends on it (e.g. if you think being good depends on moral realism, it could be hard to reexamine your belief in moral realism). This post describes the phenomenon, and gives advice for dealing with it.
The Hidden Complexity of Wishes. Imagine you had a device that could cause any concrete statement to become true. This post explores the difficulties you would have getting what you want with the device.
(Thanks to various people at the Lightcone offices for beta reading this post, particularly Ben Pace).
WHAT
I CAN HAVE A MORNING SEQUENCE INSTEAD OF MORNING NEWS
Idea: forward emails to yourself every day for a month with a sequence post each day.
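The idea above can be sketched as a small script run once a day (e.g. from cron). This is a minimal sketch, not a finished tool: the addresses, URLs, and start date are placeholder assumptions, and actual sending via `smtplib` is left out.

```python
# Sketch of the "one sequence post per day" email idea.
# Run once daily; it picks today's post from a list. All names/URLs below
# are illustrative placeholders, not real sequence links.
import datetime
from email.message import EmailMessage

POSTS = [
    ("Fake Explanations", "https://example.com/fake-explanations"),
    ("Leave a Line of Retreat", "https://example.com/leave-a-line-of-retreat"),
]
START = datetime.date(2023, 1, 1)  # the day you begin the month of emails

def todays_message(today, start=START, posts=POSTS, to="me@example.com"):
    """Return the email for today's post, or None once the list is exhausted."""
    index = (today - start).days
    if not 0 <= index < len(posts):
        return None
    title, url = posts[index]
    msg = EmailMessage()
    msg["Subject"] = f"Morning sequence: {title}"
    msg["To"] = to
    msg.set_content(url)
    return msg

# To actually send: pass the result to smtplib.SMTP(...).send_message(msg).
```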
Some other posts that I think are good and work well stand-alone:
Cached Thoughts
Expecting Short Inferential Distances
What Is Evidence?
The Lens That Sees Its Flaws
Illusion of Transparency: Why No One Understands You
Double Illusion of Transparency
I found that some parts of sequences felt like cliff-hangers and demanded that the next post follow, but for the most part I could jump around to whatever piqued my interest at the time.
Logistically, tracking what I had left to read was fiddly: I ended up putting the titles of and links to all the sequence posts (scraped from some overview page) into a checklist in my notes app, then trying to remember to tick them off as I read them. If LessWrong itself had a feature for this, I was unaware of it at the time.
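The checklist step above is easy to script once you have the titles and links. A minimal sketch, assuming you already have the list in hand (the titles and `example.com` URLs here are placeholders, not scraped data):

```python
# Turn a list of (title, url) pairs into a markdown checklist you can
# paste into a notes app, ticking each box as you read the post.
posts = [
    ("Fake Explanations", "https://example.com/fake-explanations"),
    ("Leave a Line of Retreat", "https://example.com/leave-a-line-of-retreat"),
]

def make_checklist(posts):
    # One unchecked "- [ ]" item per post, linked to the post's URL.
    return "\n".join(f"- [ ] [{title}]({url})" for title, url in posts)

print(make_checklist(posts))
```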
Personally, my process goes something like this:
Click a citation/link on LW that sends me to a sequence post
Read the post, opening any interesting citations in new tabs
Repeat until I run out of time or run out of interesting citations (the latter never happens)
{ TV Tropes / Wikipedia } style!
https://www.readthesequences.com
https://www.readthesequences.com/#preface
I recommend the preface because it tells you some issues that the author noticed with the work. (It doesn’t mention ‘this research is no longer up to date’, but it does mention some other things.)
There have been some comments (or at least a post, I think) on what that ‘book’/he had to say about neural networks. Understandability has also been mentioned as an issue, and I think that’s more of a problem where it’s less clear how to evaluate ‘ability’ or ‘performance’.