Following Swerve’s example above, I’ve also decided to try out your exercise and post my results. My favorite instrumental rationality technique is Oliver Habryka’s Fermi Modeling. The way I usually explain it (with profuse apologies to Habryka for possibly butchering the technique) is that you quickly generate models of the problem using various frameworks and from various perspectives, then weight the conclusions of those models by how closely they seem to conform to reality. (@habryka, please correct me if this is not what Fermi Modeling is.)
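To make the weighting step concrete, here is a minimal sketch of how I picture it, as a fit-weighted average over quick models. This is my own toy formalization, not necessarily Habryka’s; the frameworks, estimates, and fit scores are all invented for illustration.

```python
import numpy as np

# Toy example: estimate how long a project will take (in days).
# Each "model" is a quick estimate generated under a different framework,
# plus a rough fit-to-reality score in [0, 1]. All numbers are invented.
models = {
    "outside view (similar past projects)": {"estimate": 30, "fit": 0.8},
    "inside view (task decomposition)": {"estimate": 18, "fit": 0.5},
    "incentives model (who wants this done?)": {"estimate": 45, "fit": 0.3},
}

fits = np.array([m["fit"] for m in models.values()], dtype=float)
weights = fits / fits.sum()  # normalize fit scores into weights
estimates = np.array([m["estimate"] for m in models.values()], dtype=float)

# Combined conclusion: fit-weighted average of the models' estimates.
print(f"weighted estimate: {weights @ estimates:.1f} days")
```

The point isn’t the arithmetic; it’s that no single framework gets to dictate the answer, and a framework that fits reality poorly still contributes, just less.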
For your exercise, I’ll try to come up with variants/applications of Fermi modeling that are useful in other contexts.
1. Instead of using different perspectives or frameworks, take one framework and vary the inputs, then weight the conclusions by how likely the inputs are, as well as by how consistent they are with the data (see the sketch after this list).
2. Likewise, instead of checking one story on each side when engaged in Pyrrhonian skepticism, tell a bunch of stories consistent with either side, then weight them by how likely each story is.
3. To test what your mental model actually says, try varying parts of the model’s inputs/outputs randomly and see which combinations fit well or horribly with your model.
4. When working in domains where you have detailed mental simulations (for example, dealing with people you’re very familiar with, or simple manual tasks such as picking up a glass of water), instead of running the inner sim technique once with the most likely/most available set of starting conditions, run as many simulations as possible and weight them by how likely their starting conditions are.
5. When doing reference class forecasting, vary the reference class used, to test for model robustness.
6. Instead of answering a probability question directly from a gut feeling, try to imagine different possibilities under which the thing happens or doesn’t happen, then vary the specific scenarios (and simulate them in your head) to see which possibilities are robust. Give your probability judgment after consulting the results of these robustness checks.
7. When I am developing and testing (relatively easy to communicate) rationality techniques in the future, I will try to vary the technique in different ways when presenting it to people, and see how robust the technique is to different forms of noise.
8. I should do more mental simulations to calibrate myself on how good the actions I didn’t take would have been, instead of just relying on my gut feeling/how well other people who took those actions seem to be doing.
9. Instead of using different perspectives or frameworks, I could do Fermi modeling with different instrumental rationality techniques when approaching a difficult problem: quickly go through my list of instrumental rationality techniques, then weight the suggestions made by each of them by how applicable the technique is to the specific problem I’m stuck on.
10. Recently, I’ve been reading a lot of biographies/autobiographies of great 20th-century scientists, for example Feynman and James Watson. When encountering a novel scientific problem, instead of only thinking about what the most recently read-about scientist would say, I should keep a list of scientists whose thought processes have been inspirational to me and try to imagine what each of them would do, weighting them by how applicable (my mental model of) their experience is to the specific problem.
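For item 1, here is a minimal sketch of what “vary the inputs and weight the conclusions by how likely the inputs are” might look like. Everything in it is an assumption for illustration: the toy model and the input distributions are made up, and sampling each input from the distribution you believe it follows is one way to implement the weighting implicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-framework model: its conclusion as a function of inputs.
def model(growth_rate, churn):
    # Toy projection: users after 12 months, starting from 1000.
    return 1000 * (1 + growth_rate - churn) ** 12

# Rather than one best-guess input, sample many plausible input combinations
# from the distributions you believe the inputs follow; more likely inputs
# then automatically contribute more to the conclusion.
growth = rng.normal(0.10, 0.03, size=10_000)  # assumed beliefs about growth
churn = rng.normal(0.04, 0.01, size=10_000)   # assumed beliefs about churn

conclusions = model(growth, churn)
print(f"mean projection: {conclusions.mean():.0f} users")
print(f"10th-90th percentile: {np.percentile(conclusions, [10, 90]).round()}")
```

The spread of the conclusions, not just their mean, is the useful output here: it tells you how sensitive the framework’s conclusion is to plausible variation in its inputs.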
I guess Fermi modeling isn’t so much a single hammer as the “hammer” of the nail mindset. So some of the applications or variants I generated above seem to be ways of applying more hammers to a fixed nail, rather than of applying the same fixed hammer to different nails.
Awesome! Glad to have made it into your top techniques! The explanation seems about as good as a one-sentence explanation of Fermi Modeling gets.
The simple explanation makes sense. However, I’m sure there’s a lot more to this than is conveyed in one sentence, and I’d really like to get my hands on a more in-depth explanation if possible. A Google search for the term “Fermi Modeling”, as well as a search of your LW post history, hasn’t yielded anything. Is there a post somewhere I can read?
I second the interest in #10. Benjamin Franklin famously employed this strategy with philosophers and rhetoricians, writing essays in a famous person’s style and then comparing them with the source material to see how successful he was.
Related to #10, I’ve found that building up an understanding of complex topics (e.g., physics, mathematics, machine learning) is unusually enhanced by following the history of their development. This is especially true in mathematical topics, where the drive for elegant proofs leads to presentations that strip away the messy history of all the cognitive effort that went into solving the problem in the first place.
I suppose this is really just an unconventional application of the general principle of learning from history.
I think there are a lot of intuitions and thought processes behind new discoveries in mathematics and machine learning that aren’t generally taught in classes or covered in textbooks. People are also quite bad at conveying their intuitions directly when asked to in Q&As and speeches. At least in machine learning, hanging out with good ML researchers teaches me a lot about how to think about problems, in a way I haven’t been able to get even from reading their course notes and listening to their presentations. Similarly, I suspect autobiographies may convey the experience of solving problems in a way that actually lets you learn the author’s intuitions and thought processes.
Some of those are definitely stretching. =P
#10 is extremely thought-provoking, I wonder how much lost intuition is buried in “flavor of the month” scientific fields and approaches of history. Do you have examples of special features of Feynman’s and Watson’s (say) approaches?
Yeah, I agree on the stretching point.
The main distinguishing thing about Feynman, at least from reading his two autobiographies, seemed to be how irreverent he is. He doesn’t do science because it’s super important; he does the science he finds fun or interesting. He is constantly going on rants about the default way of looking at things (at least in his inner monologue) and ignoring authority, whether by blowing up at the science textbooks he was asked to read, ignoring how presidential committees traditionally functioned, or disagreeing with doctors. He goes to strip clubs because he likes interacting with pretty girls. It’s really quite different from the rather stodgy utilitarian/outside-view mindset I tend to reference by default, and I think reading his autobiographies gave me a lot more of what Critch calls “entitlement to believe”.
When I adopt this “Feynman mindset,” it feels like letting my inner child out. I feel like I can just go and look at things, form hypotheses, and ask questions, irrespective of what other people think. I abandon the feeling that I need to do what is immediately important, and instead go look at what I find interesting and fun.
From Watson’s autobiography, I mainly got a sense of how much even great scientists are driven by petty desires, such as the fear that someone else will beat them to a discovery, or annoyance at their collaborators. For example, it seemed that a driving factor in Watson and Crick’s work on DNA was the fear that Linus Pauling would discover the true structure first, and a lot of their failure to collaborate better with Rosalind Franklin was due to personal clashes with her. Of course, Watson does also display some irreverence toward authority; he held fast to his belief that their approach to finding the structure of DNA would work, even when multiple more senior scientists disagreed with him. But I think the main thing I got out of the book was a visceral appreciation of how important social situations are in motivating even important science.
When I adopt this “Watson mindset,” I think about the social situation I’m in, and use it to motivate me. I call upon the irritation I feel when people act just a little too suboptimally, or do things for the wrong reasons. I see how absolutely easy many of the problems I’m working on are, and use my irritation at people having so far failed to solve them to push myself to work harder. This probably isn’t a very healthy mindset to hold in the long term, and there are obvious problems with it, but it feels very effective at getting me to push past schleps.