The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?
More concretely than Lumifer’s answer, it encourages you to diversify your plans and to try not to rely on leveraging any one model or enterprise. It also encourages you to play the odds instead of playing it safe, because safe is rarely as safe as you think it is. Try new things regularly, since the cost of doing them is generally linear while the payoff could easily be exponential.
That’s what I got out of it, anyways.
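As a toy illustration of that last point (my own sketch, not from the thread), here is a small Monte Carlo in Python. It assumes a hypothetical heavy-tailed payoff distribution as a stand-in for “could easily be exponential”: the cost of attempts grows linearly with their number, while a handful of lucky draws can dominate the total return.

```python
import random

# Toy sketch (not from the thread): each "new thing" costs a fixed amount,
# but its payoff is drawn from a heavy-tailed distribution, so a small
# number of attempts can dominate the total return.
random.seed(0)

COST_PER_ATTEMPT = 1.0

def payoff():
    # Hypothetical heavy-tailed payoff: usually modest, occasionally huge.
    return random.paretovariate(1.1)  # Pareto with tail index just above 1

for n_attempts in (1, 10, 100):
    total_cost = n_attempts * COST_PER_ATTEMPT          # grows linearly
    total_payoff = sum(payoff() for _ in range(n_attempts))
    print(f"{n_attempts:4d} attempts: cost {total_cost:7.1f}, "
          f"payoff {total_payoff:9.1f}")
```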
I’m not actually sure the concept can do all that work, mostly because we don’t have plausible theories for making decisions from imprecise probabilities (with precise probabilities we have expected utility maximization). See e.g. this very readable paper.
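A minimal sketch (mine, not from the linked paper) of the gap being pointed at: with a single precise probability, expected utility maximization settles the choice, while with an interval of probabilities you still need some further decision rule, for example Γ-maximin (maximize worst-case expected utility), and the two can disagree.

```python
# Minimal illustration (not from the linked paper): two acts, one binary
# state. With a precise probability, expected utility maximization picks an
# act; with an interval of probabilities ("imprecise"), a rule like
# Gamma-maximin is one candidate, and it can disagree.

ACTS = {
    "risky": {"good": 100.0, "bad": -50.0},  # utilities by state
    "safe":  {"good": 10.0,  "bad": 10.0},
}

def expected_utility(act, p_good):
    return p_good * ACTS[act]["good"] + (1 - p_good) * ACTS[act]["bad"]

# Precise probability: straightforward expected utility maximization.
p = 0.6
best_precise = max(ACTS, key=lambda a: expected_utility(a, p))
print("precise p=0.6:", best_precise)            # picks "risky"

# Imprecise probability: p_good only known to lie in [0.2, 0.8].
# Expected utility is linear in p_good, so the worst case over the interval
# is attained at an endpoint; checking the two endpoints suffices.
interval = (0.2, 0.8)

def worst_case_eu(act):
    return min(expected_utility(act, q) for q in interval)

best_maximin = max(ACTS, key=worst_case_eu)
print("Gamma-maximin over [0.2, 0.8]:", best_maximin)  # picks "safe"
```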
I don’t agree with that (a quick example is that speculating about the Big Bang is entirely pointless under this approach), but that’s a separate discussion.
It allows you not to invent fake probabilities and then suffer from believing you have a handle on something when in reality you don’t.
Such speculation may help guide actions regarding future investments in telescopes, decisions on whether to try to look for aliens, etc.
OK, I’ll give you that we might non-instrumentally value the accuracy of our beliefs (even so, I don’t know how to unpack ‘accuracy’ in a way that can handle both probabilities and uncertainty, but I agree this is another discussion). I still suspect that the concept of uncertainty doesn’t help with instrumental rationality, bracketing the supposed immorality of assigning probabilities from sparse information. (Recall that you claimed Knightian uncertainty was ‘useful’.)