In my work, there are a number of rationality techniques that I have learned that have … contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.
Could you please tell us the specific techniques and/or situations? (I’m sorry to keep asking this of everyone, but the answers are really interesting/useful. We need to figure out what different peoples’ practice actually looks like, and what mileage people do and don’t get from it. In detail.)
[Sorry for the slow response. Have been away for the weekend.]
No need to apologize, it’s an excellent question. And to be honest, because my work involves a lot of data analysis, and using such analysis to inform decision-making, I may be cheating somewhat here. There are times when remembering that “probability is in the mind” has stopped me getting confused and helped me reach the right answer more quickly, but they’re probably not particularly generalizable. ;)
Here’s a quick list of some techniques that have helped me and that might be more generally applicable. They’re not necessarily techniques that I always manage to apply consistently, but I’m working on it, and when I do, they seem to make a difference.
(Listing them like this actually makes them seem pretty trivial; I’ll leave others to decide whether they really warrant the imprimatur of “rationality techniques”.)
(1) Avoiding confirmation bias in program testing: I’m not a great programmer by any stretch of the imagination, but programming is something I have to do a fair amount of. Almost every time I write a moderately complicated program, I have to fight the urge to believe that this time I’ve got it basically right on the first go, to throw a few basic tests at it, and to get on with using it as soon as possible, without really testing it properly. The times I haven’t managed to fight this urge have almost always cost me much more time down the line than taking a little extra time at the outset to test properly would have. (There’s a minimal sketch of what I mean by “testing to break it” after this list.)
(2) Leaving a line of retreat: Getting myself too attached to particular hypotheses has also wasted a fair amount of my time. In particular, there’s always a temptation, when data happens not to fit your preconceived ideas, to keep trying slightly different analyses to see whether they’ll give you the answer you expected. This can sometimes be reasonable, but if you’re not careful, it can lead to wasting an enormous amount of time chasing something that’s ultimately a dead end. I think that forcing myself to reassess hypotheses sooner rather than later has helped to cut down on that sort of dead-end analysis.
(3) Realizing that some decisions don’t matter (aka not being Buridan’s ass): I’m something of a perfectionist, and have a tendency to want every decision to be optimal. In any sort of analysis, you have to make numerous, more or less arbitrary choices about exactly how to proceed. Some of these choices appear difficult because the alternatives are finely balanced; so you keep searching for some factor that could make the crucial difference between them. But sweating every decision like this (as I used to do) can kill a lot of time for very little reward (especially, though not only, when the stakes are small).
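To make point (1) concrete, here’s a minimal sketch of what I mean by testing to break a program rather than to confirm it. The moving_average function and the specific cases are purely hypothetical; the point is just that most of the tests target the inputs most likely to expose a mistake (empty input, a window larger than the data, and so on), rather than the one case I already expect to pass.

```python
import unittest

def moving_average(xs, window):
    """Return the mean of each consecutive run of `window` values."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

class TestMovingAverage(unittest.TestCase):
    def test_happy_path(self):
        # The kind of test it's tempting to stop at.
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_window_equal_to_length(self):
        self.assertEqual(moving_average([1, 2, 3], 3), [2.0])

    def test_window_larger_than_data(self):
        # Easy to get wrong with off-by-one slicing; should yield no averages.
        self.assertEqual(moving_average([1, 2], 5), [])

    def test_empty_input(self):
        self.assertEqual(moving_average([], 3), [])

    def test_bad_window(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2, 3], 0)

if __name__ == "__main__":
    unittest.main()
```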
But to be honest, the biggest time-saver I’ve encountered is taking the outside view to avoid the planning fallacy. Over the years, I’ve taken on a number of projects that I would not have taken on had I realized at the outset how much time they would actually take. Usually, these have both taken up time that could have been better spent elsewhere, and created a great deal of unnecessary stress. The temptation to take the inside view and be overly optimistic in time estimates is something I always have to consciously fight (and that, per Hofstadter’s Law, I’ve never managed to fully overcome), but I’ve become much better at it.
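For what it’s worth, here’s a rough sketch of the kind of outside-view adjustment I mean, with entirely made-up numbers: rather than trusting the gut (“inside”) estimate, scale it by how much similar past projects actually overran their estimates (a crude form of reference class forecasting).

```python
from statistics import median, quantiles

# (estimated weeks, actual weeks) for hypothetical past projects of a similar kind.
past_projects = [(2, 5), (4, 6), (3, 7), (1, 2), (6, 11)]

# How badly each past estimate turned out to be wrong.
overrun_ratios = [actual / estimated for estimated, actual in past_projects]

inside_estimate_weeks = 3  # what my gut says this project will take

# Outside view: scale the gut estimate by the typical historical overrun.
typical = inside_estimate_weeks * median(overrun_ratios)

# A more pessimistic figure, using the upper quartile of past overruns.
q1, q2, q3 = quantiles(overrun_ratios, n=4)
pessimistic = inside_estimate_weeks * q3

print(f"Inside view:  {inside_estimate_weeks:.1f} weeks")
print(f"Outside view: {typical:.1f} weeks (median overrun)")
print(f"Pessimistic:  {pessimistic:.1f} weeks (75th-percentile overrun)")
```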
Z_M_Davis’ recent post on the sunk cost fallacy reminded me that being willing to give up unproductive projects can also be a time-saver, although the issues here are somewhat more complicated, for reasons some have mentioned in the comments (e.g. the reverse sunk cost fallacy, and the reputational costs involved in abandoning projects).