Man, this post is special to me, because it is one of the most powerful rationality tools I know of and was the backbone of my own high-growth intro to rationality. So while I want to gesture at a patch for one problem I expect you will run into, I’m not that sure it’s even worth paying attention to right now. Just running with the current skill seems fantastic. But, here’s my concern for the future.
When you say “stop saying wrong things”, most of your examples use logic to define “wrongness”. Which makes sense. If you intend to learn Chinese and don’t check how long it will take you, but in fact you will never finish, logically you made a mistake. But if you intend to helm Etsy and don’t run A/B tests to provide a feedback loop for whether you were correct about customer desire, you also logically made a mistake, despite the fact that I think this is a perfectly reasonable decision. I expect a fair number of these false positives to come up, given the way you describe your current filter.
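For concreteness, here’s a minimal sketch of the kind of feedback loop an A/B test gives you. This is purely illustrative; the function and the conversion numbers are invented, not anything from the post:

```python
import math

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is variant B's conversion
    rate detectably different from variant A's?"""
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical run: 120/2000 conversions on the old listing page
# vs. 151/2000 on the new one.
print(ab_test_pvalue(120, 2000, 151, 2000))  # ~0.05, weak evidence of a real lift
```

The test itself is cheap to write; the cost I worry about below is everything that running it commits your attention to.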
Before we get to what is going wrong, we should first examine a bit of what’s going right. You’re taking a situation where you have an urge to do your standard behavior, and noticing that you can pattern-match it to a case where people often do wrong things as defined by biases research, logic, and the people you read. These sources have a very good track record in many domains. When you then switch your behavior, you will often be right, and you will have explored some new behavior besides.
However, there are certain domains or classes of things where logic does not do especially well, and likewise in which many people you read will probably give good-sounding advice that turns out to be wrong. I think startups are probably one of them. Zvi’s post about Leaders of Men makes this point in a way I like, using the example of baseball managers. There are definitely lots of “dumb” things managers do that are easy for a logician on the sidelines to point out. These cost them some games. But the mistakes are driven by policies that are actually very powerful, whose benefits outweigh the costs they impose on the games. I think the A/B testing example fits this. Yes, it helps, but not as much as running with other, more important action policies: perhaps exploring design space, or letting your designers’ intuitions run, or just focusing resources on management, or who-knows-what (they probably know, though). A/B testing is optimizing, and you don’t want to commit the sin of premature optimization. Five years in sounds reasonably less premature to me.
So, to try to put a point on what goes wrong: logic has its weak points, as any straw-postmodernist will tell you, though they’re obviously wrong if they say logic isn’t patchable. But just because their pendulum has swung too far doesn’t mean there aren’t some classic mistakes made by overapplying straw logic. I think premature optimization is a really good example. “Ignoring complexity” is another. Garbage-in, garbage-out modeling is another. A huge number of “biases”, I think, are actually the correct thing to do or think a significant fraction of the time. Social skills, sports, dancing, music, politics, system design, etc. are all sandboxes of complex domains in which logic doesn’t work very well in practical usage, and we indeed see tons of mistakes in them by both straw-rationalists and real ones.
Maybe I sound like I’m preaching to the choir here. But there’s a subtle-ish point I would still like to get across: if you override a behavior with logic, the original behavior was basically always in place for a very good reason. Your behaviors are built on each other. Immediately stopping a behavior will hurt the behaviors on top of it or supported by it. For the general example, halting “saying wrong things” may cause you to stop putting models out there to be destroyed by reality, which could hamper the feedback and growth process. There are plenty of more specific ones (e.g. halting “using little white lies” is great to explore, but it can make people feel less comfortable around you in jarring but hard-to-identify ways).
I think the solution here is something like “while you’re exploring what logic says to do instead, also explore heavily all the reasons the first action was being done, because your neural net is complex as shit and who knows what processes you may accidentally deprecate” (and sometimes you’ll find fascinating new subgoals and important dynamics you didn’t know existed!). Running with the logical action is great because it’s growthy and you can smash the model into reality, but *don’t cling to the naively logical model if reality is hinting it’s more complex than that*. That’s the sinkhole to really avoid. Any other mistakes are fixable.
Excited to see where this takes you.