If all you have is a hammer, everything looks like a nail.
The most important idea I’ve blogged about so far is Taking Ideas Seriously, which is itself a generalization of Zvi’s More Dakka. This post is an elaboration of how to fully integrate a new idea.
I draw a dichotomy between Hammers and Nails:
A Hammer is someone who picks one strategy and uses it to solve as many problems as possible.
A Nail is someone who picks one problem and tries all the strategies until it gets solved.
Human beings are generally Nails, fixating on one specific problem at a time and throwing their entire toolkit at it. A Nail gets good at solving important problems slowly and laboriously but can fail to recognize the power and generality of his tools.
Sometimes it’s better to be a Hammer. Great advice is always a hammer: an organizing principle that works across many domains. To get the most mileage out of a single hammer, don’t stop at using it to tackle your current pet problem. Use it everywhere. Ideas don’t get worn down from use.
Regardless of which you are at a given moment, be systematic because Choices are Bad.
Only a Few Tricks
I am reminded of a classic speech of the mathematician Gian-Carlo Rota. His fifth point is to be a Hammer (emphasis mine):
A long time ago an older and well known number theorist made some disparaging remarks about Paul Erdos’ work. You admire contributions to mathematics as much as I do, and I felt annoyed when the older mathematician flatly and definitively stated that all of Erdos’ work could be reduced to a few tricks which Erdos repeatedly relied on in his proofs. What the number theorist did not realize is that other mathematicians, even the very best, also rely on a few tricks which they use over and over. Take Hilbert. The second volume of Hilbert’s collected papers contains Hilbert’s papers in invariant theory. I have made a point of reading some of these papers with care. It is sad to note that some of Hilbert’s beautiful results have been completely forgotten. But on reading the proofs of Hilbert’s striking and deep theorems in invariant theory, it was surprising to verify that Hilbert’s proofs relied on the same few tricks. Even Hilbert had only a few tricks!
The greatest mathematicians of all time created vast swathes of their work by applying a single precious technique to every problem they could find. My favorite book of mathematics is The Probabilistic Method, by Alon and Spencer. It never ceases to amaze me that this same method applies to:
(The Erdős–Kac Theorem) The number of distinct prime factors of a random integer between 1 and n behaves like a normal distribution with mean and variance ln ln n.
(Heilbronn’s Triangle Problem) What is the largest Δ(n) for which there exist n points in the unit square, no three of which form a triangle with area less than Δ(n)?
(The Erdős–Rényi Phase Transition) A typical random graph on n vertices where each edge exists with probability (1−ε)/n has connected components of size O(log n). A typical random graph where each edge exists with probability (1+ε)/n has a giant component of size linear in n.
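To give a flavor of what a single swing of this hammer looks like, here is the classic first-moment argument for a lower bound on Ramsey numbers, reconstructed from memory rather than quoted from the book:

```latex
% The probabilistic method in one move (a sketch from memory, not a quotation):
% color each edge of $K_n$ red or blue independently with probability $1/2$,
% and let $X$ count the monochromatic copies of $K_k$. By linearity of expectation,
\[
  \mathbb{E}[X] \;=\; \binom{n}{k}\, 2^{\,1 - \binom{k}{2}} .
\]
% If this expectation is less than 1, some coloring has no monochromatic $K_k$
% at all, so the Ramsey number satisfies $R(k,k) > n$; taking
% $n = \lfloor 2^{k/2} \rfloor$ (for $k \ge 3$) is enough to make the bound go through.
```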
It’s amusing to note that in the same speech, Rota expounded the benefits of being a Nail just two points later:
Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: “How did he do it? He must be a genius!”
Both mindsets are vital.
To be a Nail is to study a single problem from every angle. It is often the case that each technique sheds light on only one side of the problem, and by circumambulating it via the application of many hammers at once, one corners the problem in a deep way. This remains true well past a problem’s resolution—insight can continue to be drawn from it as other methods are applied and more satisfying proofs attained.
Usually even the failure of certain techniques sheds light on the shape of the difficulty. One classic example of an enlightening failure is the consistent overcounting (by exactly a factor of two!) of primes by sieve methods. This failure is so serious and unfixable that it has its own name: the Parity Problem.
Dually, to be a Hammer is to study a single technique from every angle. In the case of the probabilistic method, a breadth of cheap applications was found immediately just by systematically studying uniform random constructions. However, particularly adept Hammers like Erdős upgraded the basic method into a superweapon by steadfastly applying it to harder and harder problems. Variations of the Probabilistic Method like the Lovász Local Lemma, Shearer’s entropy lemma, and the Azuma-Hoeffding inequality are now canon due to the persistence of Hammers.
Be Systematic
The upshot is not that Hammers are better than Nails. Rather, there is a place for both Hammers and Nails, and in particular both mindsets are far superior to the wishy-washy blind meandering that characterizes overwhelmed novices. There may be an endless supply of advice—even great advice—on the internet, and yet any given person should organize their life around systematically applying a few tricks or solving a few problems.
Taking an idea seriously is difficult and expensive. You’ll have to tear down competing mental real estate and build a whole new palace for it. You’ll have to field test it all over the place without getting superstitious. You’ll have to gently titrate for the amount you need until you have enough Dakka.
Therefore, be a Hammer and make that idea pay rent. Hell, you’re the president, the emperor, the king. There’s no rent control in your head! Get that idea for all it’s got.
Exercise for the reader: all things have their accustomed uses. Give me ten unaccustomed uses of your favorite instrumental rationality technique! (Bonus points for demonstrating intent to kill.)
Wow this was a lot harder than I expected. I thought about my favorite technique for a solid fifteen minutes before deciding on this, mostly because I realized that I don’t have explicit techniques that I usually use. I definitely use certain things like Immunity to Change Mapping, the Ideological Turing Test, and Neutral Hours. However, I think (78%) the majority of the benefit I’ve gained from rationality lies in a single hammer: the ability to update from evidence and think about the best option in a scenario. This mostly happens on the 5-second level, although I will quite frequently (multiple times a day) stop to viscerally update my priors.
A rather unsettling realization to find that most of my success in making more rational decisions stems from this one place. I’m not sure whether to be distraught or happy. On one hand this could be an indication that I’m very poorly diversified in my skills, or don’t have enough of them on the reflex level yet. On the other hand this could indicate the value of the approach that A Thousand Heads in a Row suggests: optimizing many small decisions such that they chain together to form systematic and repeatable progress.
The list below is there for the sake of completeness and in hopes that it encourages other people to try the exercise.
Technique: Bayesian Expected Value Calculation
1.) Combining with Murphyjitsu to determine the leverage points on any plans I make
2.) Sifting through a class syllabus and schedule to determine the best use of my time towards the desired grade
3.) Realizing that my urge to read something else and not do the exercise was a policy statement about what I expected myself to do in the future. Doing the exercise makes me more likely to do other exercises in the future and embrace the singularity mindset.
4.) Trying new foods or supplements in response to the value of information
5.) I should get a flu shot (a quick expected-value sketch follows the list)
6.) Trying to determine the best way to lead a conversation or extract information from someone
7.) Realizing I might need diversified thinking skills
8.) Realizing I should explicitly model my rationality techniques
9.) Checking the value and cost of my current action vs my ideal action at regular intervals.
10.) Realizing there is probably a good chance this technique can be upgraded by some means
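Since item 5 is the most compact example of the technique in action, here is a minimal expected-value sketch of it in Python. Every number below is an illustrative placeholder, not a real epidemiological estimate:

```python
# Hedged sketch: expected-value comparison for getting a flu shot.
# Every number here is an invented placeholder, not a real estimate.

p_flu_without_shot = 0.10   # assumed chance of catching the flu this season
p_flu_with_shot    = 0.04   # assumed chance after vaccination
cost_of_flu        = 500    # assumed cost of a bout of flu (lost time, misery), in dollars
cost_of_shot       = 40     # assumed cost of the shot plus the trip to get it

ev_no_shot = p_flu_without_shot * cost_of_flu
ev_shot    = p_flu_with_shot * cost_of_flu + cost_of_shot

print(f"Expected cost without shot: ${ev_no_shot:.2f}")
print(f"Expected cost with shot:    ${ev_shot:.2f}")
print("Get the shot" if ev_shot < ev_no_shot else "Skip the shot")
```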
You are my hero. I gave up on my own exercise after 2 examples.
+1 for actually doing the exercise!
One of my favorite optimization techniques is currency conversions—valuing a bunch of different things with a single number that can be used to make trade-offs. This technique has pitfalls, but it does a lot better than the naive human approach of just comparing alternatives based on a single variable and ignoring all the others.
Here are 12 (!) applications of currency conversions, roughly ordered from most to least obvious:
Valuing my time, and the advanced techniques of adjusting for quality of time spent.
Comparing purchases by valuing attributes separately. For example, when buying a PC I may think of how much I’m willing to pay per GB of RAM, per GHz of speed etc.
Comparing purchases, even wildly different ones like better soap vs. ski vacation, by converting everything to hedons/$. A useful way to do it is hedons/minute x minutes enjoyed/thing x things/$ (see the sketch at the end of this list).
From the above link, you can also compare experiences by converting everything to “minutes experienced x quality of experience”. This helps reconcile the experiencing self with the remembering self, if you take the latter to be a collection of reminiscing experiencing selves.
Effective altruism.
Related: avoiding sacred values. What’s more important, the dignity of a minimum wage worker or their chance of finding a job? Instead of getting stuck on sacred values, convert and compare.
Net present value calculations: converting different forms of money into a single currency. A safe dollar today > a risky dollar tomorrow. This helps with thinking about investments, loans (which are basically anti-investments), and savings.
NPV is not just about pure dollars. Should you take the job that pays more today or the one that looks better on your resume? That depends on the NPV of that resume improvement, which is worth more to a 30-year-old than to a 50-year-old.
When I was skinny and broke, I optimized for “amount of food / $”. Today I’m neither, so I optimize for “enjoyment of food / adverse-weighted calorie”. The denominator reflects that a protein calorie in the morning is “cheaper” than a sugar calorie before bed.
The pleasure I get from me-doing-me and the price I will pay in weirdness points.
Using decision matrices to compare >10 variables when choosing which house to rent or car to buy.
Using decision matrices to compare 25 variables when choosing which woman to marry.
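As promised in item 3, here is a minimal sketch of the hedons/$ chain applied to the soap-vs-ski-vacation comparison. All the hedon figures and prices are invented placeholders; the point is only how the unit conversion composes:

```python
# Hedged sketch of the hedons-per-dollar conversion:
# hedons/minute x minutes enjoyed per thing x things per dollar.
# All numbers are invented for illustration.

def hedons_per_dollar(hedons_per_minute, minutes_enjoyed_per_thing, things_per_dollar):
    """Chain the three conversion factors into a single comparable number."""
    return hedons_per_minute * minutes_enjoyed_per_thing * things_per_dollar

# Nicer soap: mild pleasure, a few minutes per shower, many showers per dollar of upgrade.
soap = hedons_per_dollar(hedons_per_minute=1, minutes_enjoyed_per_thing=5, things_per_dollar=10)

# Ski vacation: intense pleasure, a full day of enjoyment, a tiny fraction of a trip per dollar.
ski = hedons_per_dollar(hedons_per_minute=8, minutes_enjoyed_per_thing=480, things_per_dollar=1/1500)

print(f"soap upgrade: {soap:.1f} hedons/$, ski trip: {ski:.2f} hedons/$")
```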
Following Swerve’s example above, I’ve also decided to try out your exercise and post my results. My favorite instrumental rationality technique is Oliver Habryka’s Fermi Modeling. The way I usually explain it (with profuse apologies to Habryka for possibly butchering the technique) is that you quickly generate models of the problem using various frameworks and from various perspectives, then weight the conclusions of those models based on how closely they seem to conform to reality. (@habryka, please correct me if this is not what Fermi Modeling is.)
For your exercise, I’ll try to come up with variants/applications of Fermi modeling that are useful in other contexts.
Instead of using different perspectives or frameworks, take one framework and vary the inputs, then weight the conclusions drawn by how likely the inputs are, as well as how consistent they are with the data.
Likewise, instead of checking one story on either side when engaged in Pyrrhonian skepticism, tell a bunch of stories that are consistent with either side, then weight them by how likely the stories are.
To test what your mental model actually says, try varying parts of the model inputs/outputs randomly and see which combinations fit well/horribly with your model.
When working in domains where you have detailed mental simulations (for example dealing with people you’re very familiar with, or for simple manual tasks such as picking up a glass of water), instead of using the inner sim technique once with the most likely/most available set of starting conditions, do as many simulations as possible and weight them based on how likely the starting conditions are.
When doing reference class forecasting, vary the reference class used to test for model robustness.
Instead of answering with a gut feeling directly for a probability judgment for a given thing, try to imagine different possibilities under which the thing happens or doesn’t happen, and then vary the specific scenarios (then simulate them in your head) to see how robust each possibility is. Come up with your probability judgment after consulting the result of these robustness checks.
When I am developing and testing (relatively easy to communicate) rationality techniques in the future, I will try to vary the technique in different ways when presenting them to people, and see how robust the technique is to different forms of noise.
I should do more mental simulations to calibrate myself on how good the actions I didn’t take were, instead of just relying on my gut feeling/how good other people who took those actions seem to be doing.
Instead of using different perspectives or frameworks, I could do Fermi modeling with different instrumental rationality techniques when approaching a difficult problem. I would quickly go through my list of instrumental rationality techniques, then weight the suggestions made by each of them based on how applicable the technique is to the specific problem I’m stuck on.
Recently, I’ve been reading a lot of biographies/auto-biographies from great scientists in the 20th century, for example Feynman and James Watson. When encountering a novel scientific problem, instead of only thinking about what the most recently read-about scientist would say, I should keep a list of scientists whose thought processes have been inspirational to me, and try to imagine what each of the scientists would do, weighting them by how applicable (my mental model of) their experiences are to the specific problem.
I guess Fermi modeling isn’t so much a single hammer, as much as the “hammer” of the nail mindset. So some of the applications or variants I generated above seem to be ways of applying more hammers to a fixed nail, instead of applying the same fixed hammer to different nails.
Awesome! Glad to have made it into your top techniques! The explanation seems as good as a one-sentence explanation of Fermi Modeling gets.
The simple explanation makes sense. However, I’m sure there’s a lot more to this than is conveyed in one sentence. I’d really like to get my hands on a more in-depth explanation if possible. A google search of the term “Fermi Modeling” as well as searching your LW post history has not yielded anything. Is there a post somewhere I can read?
I second the interest in #10. Benjamin Franklin famously employed this strategy with philosophers and rhetoricians, by writing essays in the famous person’s style and then comparing with source material to see how successful he was.
Related to #10, I’ve found that building up understanding of complex topics (e.g., physics, mathematics, machine learning, etc.) is unusually enhanced by following the history of their development. Especially in mathematical topics, where the drive for elegant proofs leads to presentations that strip away the messy history of all the cognitive efforts that went into solving the problem in the first place.
I suppose this is really just an unconventional application of the general principle of learning from history.
I think a lot of the intuitions and thought processes that let you come up with new discoveries in mathematics and machine learning aren’t generally taught in classes or covered in textbooks. People are also quite bad at conveying their intuitions behind topics directly when asked to in Q&As and speeches. I think that at least in machine learning, hanging out with good ML researchers teaches me a lot about how to think about problems, in a way that I haven’t been able to get even after reading their course notes and listening to their presentations. Similarly, I suspect that autobiographies may help convey the experience of solving problems in a way that actually lets you learn the intuitions or thought processes used by the author.
Some of those are definitely stretching. =P
#10 is extremely thought-provoking, I wonder how much lost intuition is buried in “flavor of the month” scientific fields and approaches of history. Do you have examples of special features of Feynman’s and Watson’s (say) approaches?
Yeah, I agree on the stretching point.
The main distinguishing thing about Feynman, at least from reading Feynman’s two autobiographies, seemed to be how irreverent he is. He doesn’t do science because it’s super important; he does science he finds fun or interesting. He is constantly going on rants about the default way of looking at things (at least his inner monologue is) and ignoring authority, whether by blowing up at the science textbooks he was asked to read, ignoring how presidential committees traditionally functioned, or disagreeing with doctors. He goes to strip clubs because he likes interacting with pretty girls. It’s really quite different from the rather stodgy utilitarian/outside mindset I tend to reference by default, and I think reading his autobiographies gave me a lot more of what Critch calls “entitlement to believe”.
When I adopt this “Feynman mindset” in my head, this feels like letting my inner child out. I feel like I can just go and look at things and form hypotheses and ask questions, irrespective of what other people think. I abandon the feeling that I need to do what is immediately important, and instead go look at what I find interesting and fun.
From Watson’s autobiography, I mainly got a sense of how even great scientists are driven a lot by petty desires, such as the fear that someone else will beat them to a discovery, or how annoying their collaborators are. For example, it seemed that a major factor in Watson and Crick’s drive to work on DNA was the fear that Linus Pauling would discover the true structure first. A lot of their failure to collaborate better with Rosalind Franklin was due to personal clashes with her. Of course, Watson does also display some irreverence toward authority; he held fast to his belief that their approach to finding the structure of DNA would work, even when multiple more senior scientists disagreed with him. But I think the main thing I got out of the book was a visceral appreciation for how important social situations are for motivating even important science.
When I adopt this “Watson mindset” in my head, I think about the social situation I’m in, and use that to motivate me. I call upon the irritation I feel when people are acting just a little too suboptimally, or doing things for the wrong reasons. I see how absolutely easy many of the problems I’m working on are, and use my irritation at people having so far failed to solve them to push me to work harder. This probably isn’t a very healthy mindset to have in the long term, and there are obvious problems with it, but it feels very effective at getting me to push past schleps.
“Prefer a few large, systematic decisions to many small ones.”
Pick what percentage of your portfolio you want in various assets, and rebalance quarterly, rather than making regular buying/selling decisions
Prioritize once a week, and by default do whatever’s next on the list when you complete a task.
Set up recurring hangouts with friends at whatever frequency you enjoy (e.g. weekly). Cancel or reschedule on an ad-hoc basis, rather than scheduling ad-hoc
Rigorously decide how you will judge the results of experiments, then run a lot of them cheaply. Machine Learning example: pick one evaluation metric (might be a composite of several sub-metrics and rules), then automatically run lots of different models and do a deeper dive into the 5 that perform particularly well (see the sketch after this list)
Make a packing checklist for trips, and use it repeatedly
Figure out what criteria would make you leave your current job, and only take interviews that plausibly meet those criteria
Pick a routine for your commute, e.g. listening to podcasts. Test new ideas at the routine level (e.g. podcasts vs books)
Find a specific method for deciding what to eat—for me, this is querying system 1 to ask how I would feel after eating certain foods, and picking the one that returns the best answer
Accepting every time a coworker asks for a game of ping-pong, as a way to get exercise, unless I am about to enter a meeting
Always suggesting the same small set of places for coffee or lunch meetings
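Here is the sketch referenced in the machine-learning item above: commit to one evaluation metric, run many configurations cheaply, and only deep-dive the top 5. The model names, learning rates, and the evaluate() stub are all stand-ins, not a real pipeline:

```python
# Hedged sketch: decide the metric once, run lots of cheap experiments,
# then look closely only at the few best. evaluate() is a placeholder for
# whatever composite metric you committed to up front.
import random

def evaluate(config):
    """Stand-in for the pre-committed metric (e.g. a weighted validation score)."""
    random.seed(str(config))  # deterministic fake "training run" for illustration
    return random.random()

configs = [{"model": m, "lr": lr}
           for m in ("logreg", "tree", "mlp")
           for lr in (0.001, 0.01, 0.1)]

scored = [(evaluate(c), c) for c in configs]
scored.sort(key=lambda pair: pair[0], reverse=True)

for score, config in scored[:5]:
    print(f"{score:.3f}  {config}   <- candidate for a deeper dive")
```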
Favorite technique: think for 5 minutes by the clock (I don’t always use the clock).
Form a better question than the one you were going to ask. I find the asking of quality questions to generally be more productive than the getting of quality answers.
Divert the impulse for distraction into 5 minutes on a related subject to the one you are working on. This is my second-most common use, and I find it pays frequent dividends in providing perspective and enlivening the task at hand.
Occupy downtime, particularly when prohibited from reading for some reason.
Distract yourself on purpose when afflicted with some bad experience: illness; anxiety; sadness.
During a conversation, apply it to the person speaking. This is similar to active listening.
Do a separate 5 minutes on building a visceral experience of the problem/solution rather than just determining the answer. I find this to be very helpful in making sure I notice when the conclusion I came to previously is relevant. Also useful in planning for presentations or other high-stakes activities.
Shut myself up for five minutes. Even a fool who keeps his peace is counted as wise, when he wisely grows less foolish at the same time.
Think about how many 5 minute increments of original thought went into a specific discovery or presentation of the discovery. This is similar to the visceral experience use above, but builds the intuition for one meta-level up.
Iterate through the possible branches of history. I find this necessary from time to time to keep my appreciation for how difficult prediction actually is.
Identify ten unaccustomed uses of your favorite rationality technique.
The current CFAR workshop has a few places where participants explicitly Nail (Hamming Questions and Resolve Cycles, off the top of my head), and sometimes has a place where participants are at least exposed to the concept of Hammering (which we call Overlearning).
I was once told by an older graduate student to explicitly keep two lists: a list of problems and a list of techniques. Then, anytime I hear about a new problem, I add it to the problem list and check it against my technique list, and anytime I hear about a new technique, I add it to the technique list and check it against my problem list. I never did it (in mathematics) but it does seem like a sensible idea (in mathematics).
I’d like to do a version of this in rationality, but I find that my bugs lists decay rapidly; after a period of as little as a few days my sense of what my real bugs are shifts and I have to regenerate the bugs list from scratch or else it feels dead. I don’t keep a technique list because I find explicitly applying techniques to be mostly a chore but there might be a version of that that doesn’t suck for me.
Weird thought, but if your bugs list decays quickly, maybe you’ve not found the most important bugs? In other fields (e.g. mathematics), we continue working on the same problem for years/decades.
It’s not that my bugs change all that much or often per se—I’ve had many of the same bug symptoms—but that I keep changing my sense of what the right frame to describe the bug is.
Sometimes I think the power of all these “keep a list of” techniques really lives in the generalized ability to keep lists.
This feels very true. One could rephrase it as the ability to ensure that what you learn/figure out/discover gets solidified and built upon. I’m in the process of experimenting with the best way for me to keep in mind the highest leverage bugs in my life.
Okay, so my three core rationality tools are the Intellectual Turing Test, making bets, and Fermi estimates / Fermi modelling (and also something like ‘get curious’ but I’ve not made my thinking there totally explicit yet).
Let’s go with making bets (aka ‘the tool that helps you update on evidence’):
Make more multi-year bets
Similarly, outline the core assumptions that my long term plans are based on, and then offer people public bets on them
This feels similar to me to Elon Musk’s payment plan at Tesla, where he gets paid only if he hits ambitious targets
Find some way to massively reduce the overhead cost of making bets—I’ve made dozens of bets where I forgot to follow up
Make a bet per day
Post my odds on things regularly (daily), and design an interface where people can claim bets against me
Find some low-effort way to then get the important stats about my calibration (e.g. Brier score, Brier score over different time periods; a rough sketch of the calculation is at the end of this comment)
I feel that many of the times I explicitly make bets, it gets in the way of doing good model sharing to find where exactly we should make a bet—or whether we can in fact just converge before making the bet (that would most test the cruxes of our disagreement).
Maybe bets should be the norm for the end of a conversation, not the beginning or middle.
e.g. “Okay, let’s set a five minute timer to figure out our odds on X, then we’ll make the bet and move on to discussing something else”
Have betting parties where people do this lots and lots of times
Have a normalised amount of money you bet (e.g. five dollars)
Another issue is that many of my bets take ages for us to agree on an operationalisation. This is the point, but maybe I would improve on some skills if I practiced making bets in a domain where the operationalisation was easy (e.g. economics, stocks, sports).
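On the calibration-stats point above, here is a minimal sketch of the Brier score over a handful of resolved bets. The example bets are made up; lower is better, and always saying 50% scores 0.25:

```python
# Hedged sketch: Brier score over resolved bets.
# Each bet records the probability assigned and whether the event happened.
# The example bets below are invented for illustration.

bets = [
    {"prediction": 0.8, "happened": True},
    {"prediction": 0.3, "happened": False},
    {"prediction": 0.9, "happened": False},
    {"prediction": 0.6, "happened": True},
]

def brier_score(bets):
    """Mean squared difference between stated probability and outcome (1 or 0)."""
    return sum((b["prediction"] - float(b["happened"])) ** 2 for b in bets) / len(bets)

print(f"Brier score over {len(bets)} bets: {brier_score(bets):.3f}")
```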
Making bets is one of those things that sounds good in principle but I haven’t gotten around to doing (sort of like this list 10 applications exercise). I feel like my time horizons are simply too short right now to keep track of multi-year bets, which seem to be all the interesting ones that people want to take. Any tips?
You can play betting games where the bets resolve instantly because you can look up the answers, e.g. trading a contract with a friend worth the base-10 logarithm of the mass of the sun, in dollars. Basically competitive Fermi estimates. You can do this any time you want to look something up, right before looking it up.
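For instance, taking the sun’s mass to be roughly 1.99 × 10^30 kg and assuming the contract is denominated in kilograms:

```python
# Hedged worked example of the instantly-resolving contract above.
# Assumes the contract is in kilograms; the sun's mass is roughly 1.99e30 kg.
import math

mass_of_sun_kg = 1.99e30
payout_dollars = math.log10(mass_of_sun_kg)
print(f"The contract settles at about ${payout_dollars:.2f}")  # roughly $30.30
```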
Find some small area of your life where you can keep making bets with a friend. For example, if you go to the shops once a week with a partner, you can bet each week on whether a particular product will be sold out, or whether a certain product will have a reduced price. Two guys who live in my house play Smash Bros regularly, and they make bets about how games will go when e.g. a new friend plays with them.
What do you do regularly with friends that you could make bets about?
I get a lot of mileage out of using Rationalist Taboo, or out of thinking about concepts rather than about words.
All of the following hot-button questions are very easily solved using this technique. As Scott Alexander points out, you can get a reputation as a daring and original thinker just by using this one thing over and over again, one of the best Hammers in the rationality community.
What is the meaning of life?
Is Islam a religion of peace?
Is America a Christian nation?
Is Abortion murder?
But do I really love him?
Does the Constitution create a wall of separation between Church and State?
Is this what I should do?
Are transgenders really women, or men?
Was that a lie?
What is a sandwich?
This is one of my favorites too.
Taking examples and simple tools from the healthiest research fields (maths, physics, cs) is really great, and the exercise you gave at the end was excellent and produced some awesome comments. For these reasons, I’ve curated the post.
When it comes to the idea of Hammers and Nails, it’s useful to keep in mind that you will get a hit when you use a hammer that nobody else has tried on that particular nail before.
The more uncommon your hammer happens to be, the more likely it is that there weren’t hundreds of other people before you who tried to use it on the nail.
Feynman also explicitly spoke about hammer-mode: “[so] I had got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me”. There are also some excerpts here: https://www.farnamstreetblog.com/2016/07/mental-tools-richard-feynman/
Favorite technique: Argue with yourself about your conclusions.
By which I mean: if I have any reasonable doubt about some idea, belief, or plan, I split my mind into two debaters who take opposite sides of the issue, each of which wants to win, and I use my natural competitiveness to drive insight into the issue.
I think the accustomed use of this would be investigating my deeply held beliefs and trying to get to their real weak points, but it is also useful for:
Examining my favored explanation of a set of data
Figuring out whether I need to change the way I’m presenting a set of data after I have already sunk costs into making the visualizations
Understanding my partner’s perspective after an argument.
Preparing for expected real life arguments
Forcing myself to understand an issue better, even when I don’t expect I will change my mind about it.
Questioning whether the way I acted in a situation was acceptable.
An exercise in analytic thinking.
Evaluating two plausible arguments I’ve heard but don’t have any particularly strong feelings on
Deciding whether to make a purchase
Comparing two alternative plans
I think murphyjitsu is my favorite technique.
sometimes failing lets you approach a problem from a different angle
humor often results from failure, so anticipating how you’ll fail and nudging to make it more probable might create more humor
murphyjitsu is normally used in making plans, but you can murphyjitsu your opponent’s plans to identify the easiest ways to break them
micro-murphyjitsu is the art of constantly simulating reality like 5 seconds before, sort of like overclocking your OODA loop
murphyjitsu is a fast way to tell if your plan is good or not—you don’t always have to make it better
you can get intuitive probabilities for various things by checking how surprised you are at those things
if you imagine your plan succeeding instead of failing, then it might cause you to realize some low-probability high-impact actions to take
you can murphyjitsu plans that you might make to get a sense of the tractability of various goals
murphyjitsu might help correct for overconfidence if you imagine ways you could be wrong every time you make a prediction
You can murphyjitsu things that aren’t plans. E.g. you can suppose the existence of arguments that would change your mind.
Going through Hammertime for the second time now. I tried to figure out 10 not-too-usual ways in which to utilize predictions and forecasting. Not perfectly happy with the list of course, but a few of these ideas do seem (and in my experience actually are; 1 and 2 in particular) quite useful.
Predicting one’s own future actions to calibrate on one’s own behavior
When setting goals, using predictions on the probability of achieving them by a certain date, giving oneself pointers which goals/plans need more refinement
Predicting the same relatively long-term things independently at different points in time (without seeing earlier predictions), then comparing the many predictions of the same thing to figure out how much noise one’s predictions entail; if no major updates shifted the odds, they should stay about the same, or consistently drift slightly up or down over time as the deadline gets closer
Predicting how other people react to concrete events or news, deliberately updating one’s mental model of them in the process
Teaming up with others, meta-predicting whether a particular set of predictions by the other person(s) will end up being about right, over- or underconfident
When buying groceries (or whatever), before you’re done, make a quick intuitive prediction of what the total price will be
When buying fresh fruit or vegetables or anything else that’s likely to spoil, make predictions about how likely it is you’re going to eat each of these items in time
Frequently, e.g. yearly, make predictions about your life circumstances in x years, and evaluate them in the future to figure out whether you tend to over- or underestimate how much your life changes over time
Before doing something aversive, make a concrete prediction of how negatively you’ll experience it, to figure out whether your aversions tend to be overblown
Experiment with intuitive “5 second predictions” vs those supported by more complex models (Fermi estimate, guesstimate etc.) and figure out which level of effort (in any given domain) works best for you; or to frame it differently, figure out whether your system 1 or system 2 is the better forecaster, and maybe look for ways in which both can work together productively
Ten uses for goal factoring that I personally would not normally consider:
1. When I’m craving a certain food or just hungry in general, I could break it up into flavors, textures, and nutrients I desire and cook up something new that fits exactly what I want
2. For shopping lists
3. For choosing what items to keep or discard when decluttering
4. Create a business plan by having some target users goal factor their problem
5. Goal factor my relationship with someone I’m close to and have them do the same, then share the results with each other.
6. Political cause prioritization.
7. Goal factor X so I can write a poem about X
8. To make my dating website profiles more honest
9. To choose which friends to hang out with more
10. To see whether I truly understand my friend’s hard situation, I could put myself in their shoes and imagine them going through the goal factoring process for the hard problem they’re dealing with. After getting what I think are their motivations, tell them.
Ok… I’m not sure if it can be counted as an “instrumental technique”, but I often think in terms of Kahneman’s System 1 & System 2.
The exercise … was hard, so I also included ideas which are common but can be “rediscovered” this way.
Predicting what should be doable with machine learning techniques now or really soon. My model is that “System 1” usually means some dedicated neural “hardware” for a task, and it seems narrow AIs are now approximately at the same level of power. The prediction is that you can probably train NNs to recognize e.g.: the emotional state of people; sexual orientation of people; aggressiveness; “moods” of places.
Trying to apply my “system 1” instead of training a neural network model or applying some complex statistical computation. E.g. I tried to predict the results of the ongoing Czech presidential elections this way: take the number of votes for the 6 leading candidates in the previous elections, ask my system 1 how similar those candidates are to the current ones (assuming we have relatively good hardware for simulating other people), then calculate how the votes would redistribute based on the similarity “guess” (a rough sketch of this follows the list)
Estimating what people’s intuitive moral feelings would be and where they are inconsistent with some more formal “system 2” model, based on the guess that such feelings are “system 1” heuristic computations.
Trying to improve one’s “system 1” processing speed in useful domains, e.g. reading text faster.
Training system 1 to pattern match some class of situations and do something… well, that’s TAPs :)
Training system 1 to do some routine simple but useful calculation
Trying to extract some “lower level” data from System 1 computations than what is usually presented. E.g. people seem to have some “sense of status”, but they usually apply it indirectly.
Applying ideas from “adversarial machine learning” to think about inputs able to “hack” some system 1 processing common among people, and observing whether it’s actually done by someone
Training system 1 to get simple numerical estimates of body parameters such as temperature or heart rate
Using system 2 to convert problems to isomorphic problems which are recognized & can be processed by system 1 hardware … well, has been done and works
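Here is the rough sketch referenced in the election item above. All candidate names, vote counts, and similarity scores are invented placeholders; the idea is just that each previous candidate’s votes get split across the current field in proportion to a gut-level similarity guess:

```python
# Hedged sketch of "redistribute previous election votes by gut-level similarity".
# All names, vote counts, and similarity scores are invented placeholders.

previous_votes = {"Old A": 1_200_000, "Old B": 900_000, "Old C": 400_000}

# similarity[old][new]: system-1 guess of how much Old X's voters resemble New Y's voters
similarity = {
    "Old A": {"New X": 0.7, "New Y": 0.2, "New Z": 0.1},
    "Old B": {"New X": 0.1, "New Y": 0.6, "New Z": 0.3},
    "Old C": {"New X": 0.2, "New Y": 0.2, "New Z": 0.6},
}

predicted = {new: 0.0 for new in ("New X", "New Y", "New Z")}
for old, votes in previous_votes.items():
    total = sum(similarity[old].values())
    for new, sim in similarity[old].items():
        predicted[new] += votes * sim / total   # split old votes proportionally

for new, votes in sorted(predicted.items(), key=lambda kv: -kv[1]):
    print(f"{new}: {votes:,.0f} predicted votes")
```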
Hammer: when there’s low downside, you’re free to try things. (Yeah, this is a corollary of expected utility maximization that seems obvious, but I still feel like I needed to learn it explicitly, and only recently did.) Ten examples:
Spend a few hours on a last-minute scholarship application.
Try out dating apps a little (no luck yet, still looking into more effective use. But I still say that trying it was a good choice.)
Call friends/parents when feeling sad.
Go to an Effective Altruism retreat for a weekend.
Be (more) honest with friends.
Be extra friendly in general.
Show more gratitude (inspired by “More Dakka”, which I read thanks to the links at the top of this post).
Spend a few minutes writing a response to this post so that I can get practice with the power of internalizing ideas.
When headache → Advil and hot shower. It just works. Why did I keep just waiting and hoping the headache would go away on its own? Takes a few seconds to get some Advil, and I was going to shower anyways. It’s a huge boost to my well-being and productivity with next to no cost.
Ask questions. It seriously seems like I ask >50% of the questions in whatever room I’m in, and people have thanked me for this. They were ashamed or embarrassed to ask questions or something? What’s the downside?
My favourite trick is “noticing when I am not actually upset/angry/tired with someone or something”. I started doing it before I learned about LW—back then I called it “don’t fall down before you’re hit” in my head. For example, I come to visit a friend who has a young child, and have to sit outside for half an hour before she picks up her phone—but the weather is fine, and I notice I’m not actually annoyed by having to wait.