Yes, I think I have different intuitions than Taleb here. When you think about risk in terms of the strategies you use to deal with it, it doesn’t make sense to use, for instance, anti-fragility to deal with drunk driving on a personal level. It might make sense to use anti-fragility in general for risks of death, but the inputs for your anti-fragile decision should basically take the statistics for drunk driving at face value. I think it’s pretty similar to a lottery ticket in that 99% of the risk is transparent, and the remaining small amount is model uncertainty due to unknown unknowns (maybe someone will rig the lottery). The ludic fallacy in that sense applies to every risk, because there’s always some small amount of model uncertainty (maybe a malicious demon is confusing me).
One way to think about this is that your base risk is transparent and your model uncertainty is Knightian—this is a sensible way to approach all transparent risks, and it’s part of the justification for the barbell strategy.
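To make that concrete, here’s a toy sketch of how I’d decompose the lottery-ticket example (all numbers are made up): take the stated odds at face value for the transparent part, and keep the residual “maybe my model is wrong” part as Knightian rather than trying to price it into the expected value.

```python
# Toy sketch (made-up numbers): a lottery ticket's risk as a transparent
# expected-value calculation plus a small Knightian model-uncertainty term
# for unknown unknowns (e.g. a rigged lottery).

TICKET_PRICE = 2.0          # hypothetical ticket price
JACKPOT = 1_000_000.0       # hypothetical jackpot
WIN_PROB = 1 / 10_000_000   # published odds, taken at face value

# Transparent part: straightforward expected value from the stated odds.
transparent_ev = WIN_PROB * JACKPOT - TICKET_PRICE

# Knightian part: we can't assign a meaningful probability to "the model is
# wrong" (rigging, misprinted odds, a malicious demon), so we only note its
# rough size instead of folding it into the expected value.
MODEL_UNCERTAINTY_SHARE = 0.01   # assumed: ~1% of the risk is opaque

print(f"transparent EV per ticket: {transparent_ev:.2f}")
print(f"left as Knightian (unpriced): ~{MODEL_UNCERTAINTY_SHARE:.0%} of the risk")
```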
How my own driving skill differs from the average person’s feels to me like a straightforward known unknown. For rice prices, there’s the known unknown of weather and the resulting global crop yield.
For a business that sells crops, it’s reasonable to buy options to protect against risks that come from the uncertainty about future prices.
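As a rough illustration (hypothetical strike, premium, and spot prices), buying a put option puts a floor under the seller’s effective price:

```python
# Minimal sketch (hypothetical prices): a crop seller buys a put option so
# that a collapse in the spot price is offset by the option's payoff.

def put_payoff(spot: float, strike: float, premium: float) -> float:
    """Payoff per unit of a put option at expiry, net of the premium paid."""
    return max(strike - spot, 0.0) - premium

STRIKE = 100.0    # assumed strike price per tonne
PREMIUM = 4.0     # assumed option premium per tonne

for spot in (70.0, 100.0, 130.0):
    unhedged = spot                                  # revenue without the hedge
    hedged = spot + put_payoff(spot, STRIKE, PREMIUM)
    print(f"spot {spot:.0f}: unhedged {unhedged:.0f}, hedged {hedged:.0f}")
```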
> How my own driving skill differs from the average person’s feels to me like a straightforward known unknown.
I didn’t think of a model where this mattered. I was thinking more of a model like “number of mistakes goes up linearly with alcohol consumption” than “number of mistakes gets multiplied by alcohol consumption”. If it’s the latter, then this becomes an opaque risk (though one that can be measured by counting your mistakes in a given time period).
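Here’s a toy comparison of the two models (illustrative numbers only): in the additive one, population statistics transfer directly to you, while in the multiplicative one the outcome scales with your personal baseline, which is why the risk stays opaque until you measure your own rate.

```python
# Toy comparison of the two models mentioned above (illustrative numbers only).

def mistakes_additive(baseline: float, drinks: int, per_drink: float = 2.0) -> float:
    """Mistakes per hour if each drink adds a fixed number of mistakes."""
    return baseline + per_drink * drinks

def mistakes_multiplicative(baseline: float, drinks: int, factor: float = 1.5) -> float:
    """Mistakes per hour if each drink multiplies your baseline rate."""
    return baseline * factor ** drinks

for baseline in (0.5, 2.0):   # a careful driver vs. a sloppy one
    add = mistakes_additive(baseline, drinks=3)
    mul = mistakes_multiplicative(baseline, drinks=3)
    print(f"baseline {baseline}: additive -> {add:.1f}, multiplicative -> {mul:.1f}")
```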
> For a business that sells crops, it’s reasonable to buy options to protect against risks that come from the uncertainty about future prices.
Agreed. It also seems reasonable, when selecting which commodity to sell, to do a straight-up expected value calculation based on historical data and choose the one with the highest expected value. Thinking about it, perhaps there are “semi-transparent risks” that are not that dynamic or adversarial but do have black swans, and that should be its own category above transparent risks, under which commodities and utilities would go. However, I think the better way to handle this is to treat the chance of a black swan as model uncertainty that carries Knightian risk, and otherwise treat the investment as transparent based on historical data.
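A rough sketch of what I mean, with entirely hypothetical historical returns, would look like this:

```python
# Rough sketch (hypothetical data): pick the commodity with the highest
# expected value from historical returns, treating black swans as unpriced
# model uncertainty rather than folding them into the EV.

from statistics import mean

# Assumed example data: annual returns per commodity (entirely made up).
historical_returns = {
    "rice":   [0.04, 0.02, -0.01, 0.05, 0.03],
    "wheat":  [0.06, -0.02, 0.04, 0.01, 0.05],
    "coffee": [0.10, -0.08, 0.12, -0.05, 0.09],
}

# Transparent part: straight expected value from historical data.
expected_values = {name: mean(r) for name, r in historical_returns.items()}
best = max(expected_values, key=expected_values.get)

print("expected values:", {k: round(v, 3) for k, v in expected_values.items()})
print("pick on transparent EV alone:", best)
# The black-swan risk is handled separately (position sizing, options, a
# barbell allocation), not by adjusting these numbers.
```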
After having someone else on the EA Forum also point me to the data on commodities, I’m now updating the post.