You can have meta-uncertainty about WHICH type of environment you're in, which changes what strategies you should use to mitigate the risk associated with that uncertainty.
While I agree that it’s helpful to recognize situations where it’s useful to play more defensively than normal, I don’t think “meta uncertainty” (or “Knightian uncertainty”, as it’s more typically called) is a good concept to use when doing so. This is because there is fundamentally no such thing as Knightian uncertainty; any purported examples of “Knightian uncertainty” can actually be represented just fine in the standard Bayesian expected utility framework in one of two ways: (1) by modifying your prior, or (2) by modifying your assignment of utilities.
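To make (1) concrete, here's a minimal sketch (my own example, not anything from the sequence) using the classic Ellsberg urn: "not knowing the mix of balls" is just a hierarchical prior over possible urns, and marginalizing it out yields an ordinary prior over draws, which expected utility handles with no new machinery:

```python
# Sketch: "meta-uncertainty" over which environment you're in is just a
# hierarchical prior, and it marginalizes into a single ordinary prior.

# Ellsberg-style urn: 100 balls, some red and some black, unknown mix.
# Model that ignorance as a distribution over possible environments.
env_prior = {n_red: 1 / 101 for n_red in range(101)}  # uniform meta-prior

def p_red_given_env(n_red):
    # Probability of drawing red in a given environment (urn composition).
    return n_red / 100

# Marginalize: P(red) = sum over environments of P(env) * P(red | env).
p_red = sum(p * p_red_given_env(n) for n, p in env_prior.items())
print(p_red)  # ~0.5, same as a known 50/50 urn

# A bet paying 1 on red and 0 on black therefore has identical expected
# utility for the "ambiguous" urn and the known 50/50 urn; preferring the
# known urn (ambiguity aversion) gets no support from this framework.
bet_on_red_eu = p_red * 1 + (1 - p_red) * 0
print(bet_on_red_eu)  # ~0.5
```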
I don’t think it’s helpful to assign a separate label to something that is, in fact, not a separate thing. Although humans do exhibit ambiguity aversion in a number of scenarios, ambiguity aversion is a bias, and we shouldn’t be attempting to justify biased/irrational behavior by introducing additional concepts that are otherwise unnecessary. Nate Soares wrote a mini-sequence addressing this idea several years ago, and I really wish more people had read it (although if memory serves, it was posted during the decline of LW1.0, which may explain the lack of familiarity).
I seriously recommend that anyone unfamiliar with the sequence give it a read; it's not long, and it's exceptionally well written. I already linked three of the posts above, so here's the last one.
I specifically define Knightian uncertainty (which is separate from my use of meta-uncertainty) in the linked post as referring to specific strategic scenarios where naive STRATEGIES of making decisions by expected value fail, for a number of reasons (the distribution is changing too fast, the environment is adversarial, etc.).
This is different from the typical definition in that it's not implying that you can't measure the uncertainty; Bayesian epistemology still applies. Rather, it's claiming that there are other risk-mitigation strategies you should use, separate from your measurement of the uncertainty and implied by the environment itself. This is, I think, what proponents of Knightian uncertainty are actually talking about, and it's not at odds with Bayesianism.
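As a concrete illustration of the adversarial case (a toy sketch of my own, with made-up payoffs and adversary model, not anything from the linked post): in matching pennies against an adversary who can anticipate any deterministic policy, greedy expected-value play gets exploited down to the minimum payoff, while the defensive maximin policy of randomizing 50/50 is unexploitable, even though the Bayesian bookkeeping is identical in both cases:

```python
import random

# Matching pennies vs. an adversary who anticipates deterministic play.
# The point: what changes is the STRATEGY, not the epistemology.

random.seed(0)
ROUNDS = 10_000

def payoff(you, foe):
    # You win +1 if the coins match, lose 1 if they differ.
    return 1 if you == foe else -1

def run(strategy):
    totals, counts, total = [0.0, 0.0], [0, 0], 0
    for _ in range(ROUNDS):
        you = strategy(totals, counts)
        # Adversary simulates your policy on the same public state and
        # plays the opposite coin. A deterministic policy is predicted
        # perfectly; a randomized one only half the time.
        foe = 1 - strategy(totals, counts)
        r = payoff(you, foe)
        total += r
        totals[you] += r
        counts[you] += 1
    return total / ROUNDS

def naive_ev(totals, counts):
    # Greedy: pick the action with the best empirical average payoff.
    avgs = [totals[a] / counts[a] if counts[a] else 0.0 for a in (0, 1)]
    return max((0, 1), key=lambda a: avgs[a])

def defensive(totals, counts):
    # Maximin play for matching pennies: ignore estimates, randomize.
    return random.choice([0, 1])

print("naive EV avg payoff: ", run(naive_ev))   # -1.0: fully exploited
print("defensive avg payoff:", run(defensive))  # ~0.0: unexploitable
```

Note that the defensive player's beliefs about the adversary can be perfectly calibrated Bayesian beliefs; the randomization isn't a confession that the uncertainty is unmeasurable, it's just the best response to this kind of environment.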