Adam Zerner
My instinct is that it’s not the type of thing to hack at with workarounds without buy-in from the LW team.
If there were buy-in from them, I expect that it wouldn’t be much effort to add some sort of functionality. At least not for a version one; iterating on it could definitely take time, but you could hold off on that iteration if there isn’t enough interest, so the initial investment wouldn’t be high effort.
I think this is a great idea, at least in the distillation aspect.
Thanks!
Having briefer statements of the most important posts would be very useful in growing the rationalist community.
I think you’re right, but I think it’s also important to think about dilution. Making things lower-effort and more appealing to the masses brings down the walls of the garden, which “dilutes” things inside the garden.
But I’m just saying that this is a consideration. And there are lots of considerations. I feel confused about how to enumerate them, weigh them, and figure out which way the arrow points: towards being more appealing to the masses or less appealing. I know I probably indicated that I lean towards the former when I talked about “summaries, analyses and distillations” in my OP, but I want to clarify that I feel very uncertain and if anything probably lean towards the latter.
But even if we did want to focus on having taller walls, I think the “more is possible” point that I was ultimately trying to gesture at in my OP still stands. It’s just that the “more” part might mean things like higher-quality explanations, more and better examples of what the post is describing, knowledge checks, and exercises.
Since we don’t currently have that list of distilled posts (AFAIK—anyone?)
There is the Sequence Highlights which has an estimated reading time of eight hours.
Sometimes when I’m reading old blog posts on LessWrong, like old Sequence posts, I have something that I want to write up as a comment, and I’m never sure where to write that comment.
I could write it on the original post, but if I do that it’s unlikely to be seen and to generate conversation. Alternatively, I could write it on my Shortform or on the Open Thread. That would get a reasonable amount of visibility, but… I dunno… something feels defect-y and uncooperative about that for some reason.
I guess what’s driving that feeling is probably the thought that in a perfect world conversations about posts would happen in the comments section of the post, and by posting elsewhere I’m contributing to the problem.
But now that I write that out I’m feeling like that’s a bit of a silly thought. Fixing the problem would take a larger concentration of force than just me posting a few comments on old Sequence posts once in a while. By posting my comments in the comments sections of the corresponding posts, I’m not really moving the needle. So I don’t think I endorse any feelings of guilt here.
I would like to see people write high-effort summaries, analyses and distillations of the posts in The Sequences.
When Eliezer wrote the original posts, he was writing one blog post a day for two years. Surely you could do a better job presenting the content that he produced in one day if you, say, took four months applying principles of pedagogy and iterating on it as a side project. I get the sense that more is possible.
This seems like a particularly good project for people who want to write but don’t know what to write about. I’ve talked with a variety of people who are in that boat.
One issue with such distillation posts is discoverability. Maybe you write the post, it receives some upvotes, some people see it, and then it disappears into the ether. Ideally when someone in the future goes to read the corresponding sequence post they would be aware that your distillation post is available as a sort of sister content to the original content. LessWrong does have the “Mentioned in” section at the bottom of posts, but that doesn’t feel like it is sufficient.
I recently started going through some of Rationality from AI to Zombies again. A big reason why is the fact that there are audio recordings of the posts. It’s easy to listen to a post or two as I walk my dog, or a handful of posts instead of some random hour-long podcast that I would otherwise listen to.
I originally read (most of) The Sequences maybe 13 or 14 years ago when I was in college. At various times since then I’ve made somewhat deliberate efforts to revisit them. Other times I’ve re-read random posts as opposed to larger collections of posts. Anyway, the point I want to make is that it’s been a while.
I’ve been a little surprised in my feelings as I re-read them. Some of them feel notably less good than what I remember. Others blow my mind and are incredible.
The Mysterious Answers sequence is one that I felt disappointed by. I felt like the posts weren’t very clear and that there wasn’t much substance. I think the main overarching point of the sequence is that an explanation can’t say that all outcomes are equally probable. It has to say that some outcomes are more probable than others. But that just seems kinda obvious.
I think it’s quite plausible that there are “good” reasons why I felt disappointed as I re-read this and other sequences. Maybe there are important things that are going over my head. Or maybe I actually understand things too well now after hanging around this community for so long.
One post that hit me kinda hard that I really enjoyed after re-reading it was Rationality and the English Language, and then the follow up post, Human Evil and Muddled Thinking. The posts helped me grok how powerful language can be.
If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”
I’m pretty sure that I read those posts before, along with a bunch of related posts and stuff, but for whatever reason the re-read still meaningfully improved my understanding of the concept.
I assume you mean wearing a helmet while being in a car to reduce the risk of car-related injuries and deaths. I actually looked into this and, from what I remember, helmets do more harm than good. They have the benefit of protecting you from hitting your head against something, but the harm in accidents comes much more from whiplash, and by adding more weight to (the top of) your head, helmets have the cost of making whiplash worse. This cost outweighs the benefits by a fair amount.
Yes! I’ve always been a huge believer in this idea that the ease of eating a food is important and underrated. Very underrated.
I’m reminded of this clip of Anthony Bourdain talking about burgers and how people often put slices of bacon on a burger, but that in doing so they make the burger difficult to eat. Presumably because when you go to take a bite, the whole slice of bacon often ends up sliding off the burger.
Am I making this more enjoyable by adding bacon? Maybe. How should that bacon be introduced into the question? It’s an engineering and structural problem as much as it is a flavor experience. You really have to consider all of those things. One of the greatest sins in “burgerdom” I think is making a burger that’s just difficult to eat.
I’ve noticed that there’s a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn’t usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you’re talking to people who you know. But I actually don’t suspect that this plays much of a role, at least on LessWrong. As an anecdote, I’ve had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Thanks Marvin! I’m glad to hear that you enjoyed the post and that it was helpful.
Imho your post should be linked to all definitions of the sunk cost fallacy.
I actually think the issue was more akin to the planning fallacy. Like when I’d think to myself “another two months to build this feature and then things will be good”, it wasn’t so much that I was compelled because of the time I had sunk into the journey, it was more that I genuinely anticipated that the results would be better than they actually were.
It isn’t active, sorry. See the update at the top of the post.
See also: https://www.painscience.com/articles/strength-training-frequency.php.
Summary:
Strength training is not only more beneficial for general fitness than most people realize, it isn’t even necessary to spend hours at the gym every week to get those benefits. Almost any amount of it is much better than nothing. While more effort will produce better results, the returns diminish rapidly. Just one or two half hour sessions per week can get most of the results that you’d get from two to three times that much of an investment (and that’s a deliberately conservative estimate). This is broadly true of any form of exercise, but especially so with strength training. In a world where virtually everything in health and fitness is controversial, this is actually fairly settled science.
Oh I see, that makes sense. In retrospect it’s a little obvious that you don’t have to choose one or the other :)
So does the choice of which type of fiber to take boil down to the question of the importance of constipation vs microbiome and cholesterol? It’s seeming to me like if the former is more important you should take soluble non-fermentable fiber, if the latter is more important you should take soluble fermentable fiber (or eat it in a whole food), and that insoluble fiber is never/rarely the best option.
Funny. I have a Dropbox folder where I store video tours of all the apartments I’ve ever lived in. Like, I spend a minute or two walking around the apartment and taking a video with my phone.
I’m not sure why, exactly. Partly because it’s fun to look back. Partly because I don’t want to “lose” something that’s been with me for so long.
I suspect that such video tours are more appropriate for a large majority of people. 10 hours and $200-$500 sounds like a lot. And you could always convert the video tour into digital art some time in the future if you find the nostalgia is really hitting you.
Hm. I hear ya. Good point. I’m not sure whether I agree or disagree.
I’m trying to think of an analogy and came up with the following. Imagine you go to McDonald’s with some friends and someone comments that their burger would be better if they used prime ribeye for their ground beef.
I guess it’s technically true, but something also feels off about it to me that I’m having trouble putting my finger on. Maybe it’s that it feels like a moot point to discuss things that would make something better that are also impractical to implement.
I just looked up Gish gallops on Wikipedia. Here’s the first paragraph:
The Gish gallop (/ˈɡɪʃ ˈɡæləp/) is a rhetorical technique in which a person in a debate attempts to overwhelm an opponent by abandoning formal debating principles, providing an excessive number of arguments with no regard for the accuracy or strength of those arguments and that are impossible to address adequately in the time allotted to the opponent. Gish galloping prioritizes the quantity of the galloper’s arguments at the expense of their quality.
I disagree that focusing on the central point is a recipe for Gish gallops and that it leads to Schrodinger’s importance.
Well, I think that in combination with a bunch of other poor epistemic norms it might be a recipe for those things, but a) not by itself and b) I think the norms would have to be pretty poor. Like, I don’t expect that you need 10/10 level epistemic norms in the presence of focusing on the central point to shield from those failure modes; I think you just need something more like 3/10 level epistemic norms. Here on LessWrong I think our epistemic norms are strong enough that focusing on the central point doesn’t put us at risk of things like Gish gallops and Schrodinger’s importance.
I actually disagree with this. I haven’t thought too hard about it and might just not be seeing it, but on first thought I am not really seeing how such evidence would make the post “much stronger”.
To elaborate, I like to use Paul Graham’s Disagreement Hierarchy as a lens to look through for the question of how strong a post is. In particular, I like to focus pretty hard on the central point (DH6) rather than supporting and tangential points. I think the central point plays a very large role in determining how strong a post is.
Here, my interpretation of the central point(s) is something like this:
Poverty is largely determined by the weakest link in the chain.
Anoxan is a helpful example to illustrate this.
It’s not too clear what drives poverty today, and so it’s not too clear that UBI would meaningfully reduce poverty.
I thought the post did a nice job of making those central points. Sure, something like a survey of the research in positive psychology could provide more support for point #1, for example. But I found the intuitive argument for point #1 to be pretty strong on its own, and I’m pretty persuaded by it, so I don’t think I’d update too hard in response to such a survey.
Another thing I think about when asking myself how strong a post is: how “far along” it is. Is it an off-the-cuff conversation starter? An informal write-up of something that’s been moderately refined? A formal write-up of something that has been significantly refined?
I think this post was somewhere towards the beginning of the spectrum (note: it was originally a tweet, not a LessWrong post). So then, for things like citations supporting empirical claims, I don’t think it’s reasonable to expect very much from the author, and so I lean away from viewing the lack of citations as something that (meaningfully) weakens the post.
What would it be like for people to not be poor?
I reply: You wouldn’t see people working 60-hour weeks, at jobs where they have to smile and bear it when their bosses abuse them.
I appreciate the concrete, illustrative examples used in this discussion, but I also want to recognize that they are only the beginnings of a “real” answer to the question of what it would be like to not be poor.
In other words, in an attempt to describe what he sees as poverty, I think Eliezer has taken the strategy of pointing to a few points in Thingspace and saying “here are some points; the stuff over here around these points is roughly what I’m trying to gesture at”. He hasn’t taken too much of a stab at drawing the boundaries. I’d like to take a small stab at drawing some boundaries.
It seems to me that poverty is about QALYs. Let’s wave our hands a bit and say that QALYs are a function of 1) the “cards you’re dealt” and 2) how you “play your hand”. With that, I think that we can think about poverty as happening when someone is dealt cards that make it “difficult” for them to have “enough” QALYs.
This happens in our world when you have to spend 40 hours a week smiling and bearing it. It happens in Anoxan when you take shallow breaths to conserve oxygen for your kids. And it happened to hunter-gatherers in times of scarcity.
There are many circumstances that can make it difficult to live a happy life. And as Eliezer calls out, it is quite possible for one “bad apple circumstance”, like an Anoxan resident not having enough oxygen, to spoil the bunch: you can enjoy abundance in a lot of areas but scarcity in one or a few others, and the scarcity can be enough to drive poverty despite the abundance. I suppose then that poverty is driven in large part by the strength of the “weakest link”.
Note that I don’t think this dynamic needs to be very conscious on anyone’s part. I think that humans instinctively execute good game theory because evolution selected for it, even if the human executing just feels a wordless pull to that kind of behavior.
Yup, exactly. It makes me think back to The Moral Animal by Robert Wright. It’s been a while since I read it so take what follows with a grain of salt, because I could be butchering some stuff, but that book makes the argument that this sort of thing goes beyond friendship and into all types of emotions and moral feelings.
Like if you’re at the grocery store and someone just cuts you in line for no reason, one way of looking at it is that the cost to you is negligible—you just need to wait an additional 45 seconds for them to check out—and so the rational thing would be to just let it happen. You could confront them, but what exactly would you have to gain? Suppose you are traveling and will never see any of the people in the area ever again.
But we have evolved such that this situation would evoke some strong emotions regarding unfairness, and these emotions would often drive you to confront the person who cut you in line. I forget if this stuff is more at the individual level or the cultural level.
When I was a student at Fullstack Academy, a coding bootcamp, they had us all do this (mapping it to the control key), along with a few other changes to settings, like making the key repeat rate faster. I think I got this script from them.
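For anyone curious what such a script might look like: I don’t have the Fullstack version, but on macOS a minimal sketch using built-in tools could be something like this (the specific repeat-rate values here are illustrative, not theirs):

```shell
# Remap Caps Lock to Left Control using macOS's built-in hidutil.
# 0x700000039 = Caps Lock, 0x7000000E0 = Left Control
# (the 0x7000000xx prefix is the HID keyboard usage page).
hidutil property --set '{"UserKeyMapping":[
  {"HIDKeyboardModifierMappingSrc": 0x700000039,
   "HIDKeyboardModifierMappingDst": 0x7000000E0}
]}'

# Make the key repeat rate faster (lower = faster; system defaults
# are roughly KeyRepeat 6 and InitialKeyRepeat 25).
defaults write -g KeyRepeat -int 2
defaults write -g InitialKeyRepeat -int 15
```

Note that the hidutil remap doesn’t persist across reboots, so people typically run it from a login script or launch agent; the Caps Lock swap can also be done through the Keyboard pane in System Settings, and the repeat-rate changes take effect after logging out and back in.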