I feel like the simple Kahneman algorithms are amazing. Based on what I read in the Harvard Business Review article, this isn't six to eight complex variables; it's more like six cells in a spreadsheet (I sketch what I mean in code after the list). This has several implications:
Cheap: algorithms should be considered superior to expert opinion because they perform about as well for a fraction of the price.
Fast: spreadsheet calculations are very, very fast relative to an expert review process. Decision speed is a common bottleneck in organizations and on complex tasks; having more time is a general-purpose, accumulating advantage in the same way having more money is.
Simple: the small number of variables makes it clear which changes are available to try, and makes it easy to tell two versions of the algorithm apart.
Testable: being cheap, fast, and simple makes them ideal candidates for testing. It is easy to run multiple versions of an algorithm side by side for almost no more resources than running a single version takes.
Bootstrapping: because they are easy to test, the threshold of expertise required to identify the variables in the first place drops. Literature reviews no more intensive than the kind we do here would suffice to identify candidate variables, and testing can then sort out the most effective ones.
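To make that concrete, here is a minimal sketch in Python of what I mean by "six cells in a spreadsheet," plus the cheapest side-by-side test I can think of. The hiring framing, the variable names, the weights, and the concordance check are all my own illustrative assumptions, not anything from the article:

```python
import random

# Six made-up variables for a hypothetical hiring decision.
# Each gets a 1-5 rating from whoever fills in the spreadsheet.
VARIABLES = ["experience", "references", "work_sample",
             "reliability", "communication", "domain_knowledge"]

def score(candidate, weights=None):
    """The whole algorithm: six cells and a sum.

    candidate: dict mapping each variable name to a 1-5 rating.
    weights:   optional per-variable weights; equal weights by default.
    """
    weights = weights or {v: 1.0 for v in VARIABLES}
    return sum(weights[v] * candidate[v] for v in VARIABLES)

def concordance(weights, past_cases):
    """Crude side-by-side test: the fraction of case pairs where the
    higher-scored candidate actually had the better outcome.

    past_cases: list of (ratings_dict, numeric_outcome) tuples.
    """
    hits = total = 0
    for i, (a, out_a) in enumerate(past_cases):
        for b, out_b in past_cases[i + 1:]:
            if out_a == out_b:
                continue
            total += 1
            better_scored = score(a, weights) > score(b, weights)
            better_outcome = out_a > out_b
            hits += (better_scored == better_outcome)
    return hits / total if total else 0.0

# Two versions of the algorithm, tested on the same historical data for
# essentially zero marginal cost.
equal = {v: 1.0 for v in VARIABLES}
variant = dict(equal, work_sample=2.0)  # hypothesis: work samples matter more

past = [({v: random.randint(1, 5) for v in VARIABLES}, random.random())
        for _ in range(50)]  # random stand-in for real past cases
print(concordance(equal, past), concordance(variant, past))
```

The comparison at the end is the testability point in miniature: scoring the same historical cases under two weightings costs essentially nothing beyond scoring them once.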
Even where such an algorithm is outperformed by expertise, these factors make it easy to make the algorithm ubiquitous, which means we can use it to set a new floor on the quality of decisions in the relevant domain. That really does seem like raising the sanity waterline.
Decisions: fast, cheap, good. Sometimes we can have all three.