There are a bunch of helpful comments on this thread already; I’ll collect all my thoughts so far here, in no particular order:
I know people have done/thought about writing such a thing in the past, but I approached this sequence in a completely different way. It was my New Year’s Resolution to write productively every day for a year, as this seemed like the single most easily achievable and powerful mind level-up I could attain. I made a bunch of progress in January but felt my thoughts going all over the place. Hammertime was the product of me sitting down for half an hour and figuring out a simple outline for writing with some system and regularity. Because I was already writing every day, the actual writing part was completely cost-free to me, whereas it sounds like that was possibly the major roadblock in previous attempts.
My reasons for writing this sequence were, in clear order of importance: (a) to practice writing, (b) to review CFAR techniques for my own benefit, (c) to entertain, and (d) to teach instrumental rationality. With regard to (c), I believe that once you have decided on a thought to write about, the primary focus of the writing process is to make it as entertaining to read as possible, and this should come before making it useful.
The main issue was something along the lines of ‘who am I writing this for?’ Anyone reading the sequence would most likely already be familiar with the material (a la someone on LW) and thus not really need the tools and advice (on account of the information being fairly widespread and easily accessible in the rationalsphere). On the other hand, anyone who could really use the information is probably not going to be the kind of person reading this material in the first place.
I strongly disagree with this statement. I’ve been reading LW for about five years and “knew” about most of these tools abstractly without ever getting anything practical out of them. You can check the comments to Hammers and Nails and extrapolate that even longtime LWers have each practiced only 5% of all the techniques we have, and that 5% varies wildly from person to person. It’s not clear if what I’ve written so far actually helps in this direction, but I think a properly written sequence will actually inject readers with the moral fire to do the thing that they’ve known about for years.
This is part of the reason (the other reason being laziness) that I’ve eschewed science and citations—I think most LWers can produce on demand the psychological science to back up most of my techniques and claims, so adding citations would be low-value. Some of this work has already been filled in by commenters.
On the uncanny valley, I think what you’re referring to is something different and benign: unavoidable opportunity cost, rather than practicing rationality causally making you worse at life. To me, the uncanny valley refers to three phenomena (all of which have happened to me):
People becoming hyperaware of status games and deciding to play them consciously in a calculated, off-putting way. Anecdotally this seems to be a relatively common failure mode for students who go through SPARC. This is also my read on a good deal of unproductive LW conversation. Hint: high status isn’t always better!
People learning about biases and deciding to solve them by pasting a post-processing fix on top (e.g. doubling all time estimates to fix the planning fallacy), or making all decisions with System 2 because they no longer trust System 1. This is sort of the classic “Straw Vulcan.” One of the big failures of the Sequences, imo, is not detailing the important and serious reasons that many biases can actually be quite locally useful, even if they cause problems in areas they’re not designed for. We need more Chesterton’s Fence.
Rationalists beginning to see non-rationalists as normies, NPCs, or otherwise sub-human. It’s not uncommon to see conversations and word choices on LW like: using “aliveness” or “more human” as largely synonymous with being better at rationality, saying in perfect seriousness that “it’s impossible to talk to normies because they have no concept of objective truth,” or placing oneself in the tiny set of unlikely heroes who are “actually trying.”
After thinking about your reply for a while, you’ve made me update strongly towards believing that I had overestimated my own efficacy. In particular:
I’ve been reading LW for about five years and “knew” about most of these tools abstractly without ever getting anything practical out of them. You can check the comments to Hammers and Nails and extrapolate that even longtime LWers have each practiced only 5% of all the techniques we have, and that 5% varies wildly from person to person. It’s not clear if what I’ve written so far actually helps in this direction, but I think a properly written sequence will actually inject readers with the moral fire to do the thing that they’ve known about for years.
This struck me, as I hadn’t considered that I was missing so many tools. I feel like my life has been improved a lot by rationality; I can cite many, many cases where things got resolved specifically because I had the proper information and training. Yet even with everything I currently do, I now realize that I don’t implement the vast majority of the useful things we as a community have managed to come up with. If one of your primary goals with this series is to imbue people with some of that “moral fire”, then consider it a success, at least regarding my personal experience. One thing to note: while the series itself sets the context for this sort of thing to happen, it was the meta-level conversation, and that specific information about inadequacy, that helped me viscerally update. That might be series-relevant information.
Anyway, thanks for helping me fix my models and for pointing out a glaring blind spot.
At the beginning of this project I said to myself that I would be happy if it moved one other person in a significant way. Right now, I’m very happy =).
I’ll try to inject more of this moral fire into the coming cycles. One thing that I’ve come to understand is that people really don’t aim high enough. There’s a mindset where you try to self-improve until you reach a satisfactory level, something like the 90th percentile of your peer group, and relax there. There’s an alternative mindset where you believe that the better you are, the quicker you will improve by learning new tools, since each of them is a force multiplier. If you follow this mindset far enough, it’s almost dizzying.