My gut reaction told me that I hadn’t gone through the experience of the uncanny valley. After a minute of thought, I realized that there was a four-month period last year where my grades and class engagement nosedived. In context, it was because I had just learned about the diaspora and was rabbit-holing everything related to the disparate rationality blogs. I was also rereading HPMoR. I think the historic reason for my failure was two-fold: the realization that the community wasn’t dead and I wasn’t alone, and the overemphasis the Sequences and HPMoR had placed on epistemic rationality. I certainly learned how to have more accurate beliefs about the world, but I didn’t realize there was a cost to optimizing. Come to think of it, I didn’t even conceptualize optimizing as a thing people could actually do in the real world. How scary.
I’ve got to agree with your sentiment about sequences and engagement. I’ve found I’m reluctant to read something that is explicitly a sequence. You might consider uniquely labeling the rest of the Hammertime posts and only compiling them afterwards.
Meta:
I have a question. Er, not so much a single question as a concept with a strong sense of confusion attached to it. What made you decide on the tools you chose to cover? Do you believe all of these techniques are low-hanging fruit, or are they more advanced? What would other advanced techniques look like? I know that other people have attempted such things in the past (with varying degrees of success). I remember reading somewhere (unfortunately I can’t recall the source or the exact quote) that three or four people on that particular thread had strongly considered creating a primer sequence on instrumental rationality.
The main issue was something along the lines of ‘who am I writing this for?’ Anyone reading the sequence would most likely already be familiar with the material (e.g., someone on LW) and thus not really need the tools and advice (on account of the information being fairly widespread and easily accessible in the rationalsphere). On the other hand, anyone who could really use the information is probably not going to be the kind of person reading this material in the first place.
Important note! This is not a criticism of your efforts here. This has been a concern of mine for a couple of months, and it’s extremely gratifying to see someone trying to fix the problem I mentioned in the first paragraph. I suspect communicating the importance of both sides of the rationality coin will be a productive way to break part of the uncanny valley. I’m extremely happy that we have more people actually engaging in rationality practice and generating usable content, and I applaud you for your efforts. The above is just my feeble attempt at asking what direction the steepest gradient for rationality (instrumental or otherwise) currently points in. I have a sense that more is possible, but where?
There are a bunch of helpful comments on this thread already; I’ll collect all my thoughts so far here, in no particular order:
I know people have done/thought about writing such a thing in the past, but I approached this sequence in a completely different way. It was my New Year’s Resolution to write productively every day for a year, as this seemed like the single most easily achievable and powerful mind level-up I could attain. I made a bunch of progress in January but felt my thoughts going all over the place. Hammertime was the product of me sitting down for half an hour and figuring out a simple outline for writing with some system and regularity. Because I was already writing every day, the actual writing part was completely cost-free to me, whereas it sounds like that was possibly the major roadblock in previous attempts.
My reasons for writing this sequence were, in clear order of importance: (a) to practice writing, (b) to review CFAR techniques for my own benefit, (c) to entertain, and (d) to teach instrumental rationality. With regard to (c), I believe that once you have decided on a thought to write about, the primary focus of the writing process is to make it as entertaining to read as possible, and this should come before making it useful.
The main issue was something along the lines of ‘who am I writing this for?’ Anyone reading the sequence would most likely already be familiar with the material (e.g., someone on LW) and thus not really need the tools and advice (on account of the information being fairly widespread and easily accessible in the rationalsphere). On the other hand, anyone who could really use the information is probably not going to be the kind of person reading this material in the first place.
I strongly disagree with this statement. I’ve been reading LW for about five years and “knew” about most of these tools abstractly without ever getting anything practical out of them. You can check the comments to Hammers and Nails and extrapolate that even longtime LWers have each practiced only 5% of all the techniques we have, and that 5% varies wildly from person to person. It’s not clear if what I’ve written so far actually helps in this direction, but I think a properly written sequence will actually inject readers with the moral fire to do the things they’ve known about for years.
This is part of the reason (the other reason being laziness) that I’ve eschewed science and citations: I think most LWers can produce on demand the psychological science to back up most of my techniques and claims, so adding them is low value. Some of this work has been filled in by the comments.
On the uncanny valley, I think what you’re referring to is something different and benign: unavoidable opportunity cost, rather than practicing rationality causally making you worse at life. To me, the uncanny valley refers to three phenomena (all of which have happened to me):
People becoming hyperaware of status games and deciding to play them consciously in a calculated, off-putting way. Anecdotally this seems to be a relatively common failure mode for students who go through SPARC. This is also my read on a good deal of unproductive LW conversation. Hint: high status isn’t always better!
People learning about biases and deciding to solve them by pasting a post-processing fix on top (e.g., doubling all time estimates to fix the planning fallacy), or making all decisions with System 2 because they no longer trust System 1. This is sort of the classic “Straw Vulcan.” One of the big failures of the Sequences, imo, is not detailing the important and serious reasons that many biases can actually be quite locally useful, even if they cause problems in areas they’re not designed for. We need more Chesterton’s Fence.
Rationalists beginning to see non-rationalists as normies, NPCs, or otherwise sub-human. It’s not uncommon to see conversations and word choices on LW like: using “aliveness” or “more human” as largely synonymous with being better at rationality, saying in perfect seriousness “it’s impossible to talk to normies because they have no concept of objective truth,” or putting oneself in the tiny set of unlikely heroes who are “actually trying.”
After thinking about your reply for a while, I’ve updated strongly towards believing that I had overestimated my own efficacy. In particular:
I’ve been reading LW for about five years and “knew” about most of these tools abstractly without ever getting anything practical out of them. You can check the comments to Hammers and Nails and extrapolate that even longtime LWers have each practiced only 5% of all the techniques we have, and that 5% varies wildly from person to person. It’s not clear if what I’ve written so far actually helps in this direction, but I think a properly written sequence will actually inject readers with the moral fire to do the things they’ve known about for years.
This struck me, as I hadn’t considered that I was missing so many tools. I feel like my life has been improved a lot by rationality. I can cite many, many cases where things got resolved specifically because I had the proper information and training. Yet even with everything I currently do, I now realize that I don’t implement the vast majority of the useful things we as a community have managed to come up with. If one of your primary goals with this series is to imbue people with some of that “moral fire,” then consider it a success, at least regarding my personal experience. One thing to note: while the series itself sets the context for this sort of thing to happen, it was only the meta-level conversation, and that specific information about inadequacy, that helped me viscerally update. That might be series-relevant information.
Anyway, thanks for helping me fix my models and for pointing out a glaring blind spot.
At the beginning of this project I said to myself that I would be happy if it moved one other person in a significant way. Right now, I’m very happy =).
I’ll try to inject more of this moral fire into the coming cycles. One thing I’ve come to understand is that people really don’t aim high enough. There’s a mindset where you try to self-improve until you reach a satisfactory level, something like the 90th percentile among your peer group, and relax there. There’s an alternative mindset where you believe that the better you are, the faster you will improve by learning new tools, since each of them is a force multiplier. If you follow this mindset far enough, it’s almost dizzying.