Adequacy as Levels of Play
One method that I find very useful for evaluating adequacy concerns is “level of play”. If you look at different games or different leagues of the same game, it’s pretty apparent that the “level of play”—the amount of athleticism, effort, skill, planning, strategy, etc. that is on display—is quite different.
For instance, in the United States the NFL is operating at a higher level of play than college football. Similarly, baseball in the United States operates at a higher level of play than baseball in Japan, and either is significantly elevated compared to the rest of the world. In South Korea, Starcraft is treated as a professional sport, and is thus predictably operating at a higher level than Starcraft in the United States. This sort of consideration doesn’t just apply to sports—cryptocurrency trading operated at an obviously much lower level of play than normal finance for a long time (though that may be changing).
A similar concept can be applied to general adequacy—one can quite usefully analyze existing organizations or programs by asking “what’s the level of play here?” There are three basic questions that I think are quite useful for analyzing what level something is on:
Seriousness. Do people take this seriously? There are many areas that nobody really tries very hard at, and those areas usually operate at a low level of play—why bother to do it right if you don’t really care that much? In sports, think about the difference between a niche sport like jai alai and something like soccer.
Competitiveness. Do different groups compete to do better? If so, how close is the competition? There are some fields that are taken very seriously but are nevertheless not very competitive—medicine, for instance—and this tends to yield lower-level outcomes overall, since there is less incentive to integrate new advances into the system.
Aliveness. Are the conclusions being tested in actual practice? If not, are they being tested under realistic training conditions? It may surprise you to learn that there are matters that are taken seriously and competitively and yet not actually tested, but this is the state of the world! The biggest example of this I can think of would, surprisingly enough, be military strategy. History is rife with examples of situations where war planners, despite being very serious, well-funded, and in direct competition (sometimes to the point of arms races), nevertheless made grievous errors—errors which led to the deaths of thousands. One of the main reasons such drastically wrong decisions can be made is that it is very difficult to actually test military strategies under realistic conditions, and so false theories easily propagate in the absence of actual wars.
Asking these questions is a great quick way to evaluate “level of play” and hence adequacy. For instance, to take three quick examples of my evaluations:
The level of play in investment banking is quite high. People take it quite seriously, it’s very competitive, and people’s conclusions are tested all the time.
The level of play in Magic: The Gathering is lower than that but still pretty high—people take it quite seriously (as games go), it’s very competitive, and people’s conclusions are tested all the time.
The level of play in asteroid impact avoidance, on the other hand, is not very high. While this is certainly a serious affair in one sense, it isn’t highly funded, it isn’t very competitive, and it’s hard to do tests.
I find this method and framing quick and easy to use—when I frame things in terms of “adequacy” it seems at times a little distanced from the case, but when I frame them in terms of levels of play they end up much easier for me to work with.
Try this out and see if it works for you!