I’m curious what others’ thoughts are on Black Swan Theory/Knightian Uncertainty vs. pure Bayesian reasoning. Do you think there are situations in which Bayesian prediction will tend to do more harm than good?
BrienneStrohl posted something on her Facebook saying she thought the phrase “Knightian Uncertainty” had negative information value, and an interesting conversation ensued between Brienne, myself, Eliezer, Kevin Carlson, and a few others. It’s of particular interest to me because Black Swan Theory is so central to how I view the world. If it turns out I’ve been assuming unpredictability when I should be trusting my predictions, I’ll have to reevaluate some of the choices I’ve made.
Here’s the original conversation for context, if you’re interested: https://www.facebook.com/strohl89/posts/10152237491864598
I think Knightian uncertainty is a very useful concept. Sometimes “I don’t know” is the right answer. I can’t estimate the probabilities, I have no evidence, no decent priors—I just do not know. It’s much better to accept that than to start inventing fictional probabilities.
Black Swan isn’t a theory; it’s basically a correct observation that statistical models of the world are limited in many important ways and depend on many implicit and explicit assumptions (a typical assumption is the stability of the underlying process). When an assumption turns out to be wrong, the model breaks, sometimes in a spectacular way.
Nassim Taleb tried to make a philosophy out of that observation. I am not particularly impressed by it.
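A toy sketch of that model-breaking point (the regime shift and every number here are invented for illustration): a model fitted under a stability assumption will call the post-shift observation effectively impossible.

```python
import random, statistics

random.seed(3)

# Fit a mean and standard deviation during a "stable" period, assuming
# the process that generated the data will keep behaving the same way.
stable = [random.gauss(0.0, 1.0) for _ in range(1_000)]
mu, sigma = statistics.mean(stable), statistics.stdev(stable)

# The underlying process then shifts regimes, violating the assumption.
shifted_observation = random.gauss(8.0, 1.0)
z = (shifted_observation - mu) / sigma
print(f"the new observation sits {z:.1f} sigmas from the fitted model")
```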
The trouble, of course, is that “I don’t know” is not an action. If “I don’t know” means “don’t deviate from the status quo,” that can be a bad plan if the status quo is bad.
Yes, and why is this “trouble”?
The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?
More concretely than Lumifer’s answer, it would encourage you to diversify your plans and try not to rely on leveraging any one model or enterprise. It also encourages you to play the odds instead of playing it safe, because safe is rarely as safe as you think it is. Try new things regularly, since the cost of doing them is generally linear while the pay-off could easily be exponential (see the sketch below).
That’s what I got out of it, anyways.
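Here’s a minimal sketch of that linear-cost/convex-payoff argument; the cost, probability, and payoff figures are made up purely for illustration:

```python
import random

random.seed(0)

def try_new_thing():
    # One experiment: a fixed (linear) cost, and a rare but outsized payoff.
    # Invented numbers: 1% of experiments return 500x their cost.
    cost = 1.0
    payoff = 500.0 if random.random() < 0.01 else 0.0
    return payoff - cost

trials = 10_000
outcomes = [try_new_thing() for _ in range(trials)]
print(f"mean outcome per experiment: {sum(outcomes) / trials:+.2f}")
print(f"experiments that lost money: {sum(o < 0 for o in outcomes) / trials:.0%}")
```

Almost every individual experiment loses a little, yet the mean outcome is strongly positive, which is the convexity the comment is pointing at.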
I’m not actually sure the concept can do all that work, mostly because we don’t have plausible theories for making decisions from imprecise probabilities (with precise probabilities we have expected utility maximization). See e.g. this very readable paper.
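For concreteness, here is a sketch of one decision rule that has been proposed for imprecise probabilities, Γ-maximin (pick the action with the best worst-case expected utility); the payoffs and the probability interval are invented:

```python
# Two actions, two states. With a sharp prior, expected utility picks a winner.
# With only an interval for p(boom), Gamma-maximin compares worst cases instead.
utilities = {
    "safe":  {"boom": 1.0, "bust": 1.0},   # same payoff either way
    "risky": {"boom": 3.0, "bust": -2.0},  # great in a boom, bad in a bust
}

def expected_utility(action, p_boom):
    u = utilities[action]
    return p_boom * u["boom"] + (1 - p_boom) * u["bust"]

# Sharp prior: a Bayesian with p(boom) = 0.7 prefers "risky".
print({a: expected_utility(a, 0.7) for a in utilities})

# Imprecise prior: p(boom) is only known to lie in [0.3, 0.9]. Expected
# utility is linear in p, so the worst case sits at an interval endpoint.
endpoints = [0.3, 0.9]
worst = {a: min(expected_utility(a, p) for p in endpoints) for a in utilities}
print(worst)  # Gamma-maximin now prefers "safe"
```

Whether rules like this are plausible is exactly the kind of question at issue.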
I don’t agree with that (a quick example is that speculating about the Big Bang is entirely pointless under this approach), but that’s a separate discussion.
It allows you to not invent fake probabilities and suffer from believing you have a handle on something when in reality you don’t.
Such speculation may help guide actions regarding future investments in telescopes, decisions on whether to try to look for aliens, etc.
OK, I’ll give you that we might non-instrumentally value the accuracy of our beliefs (even so, I don’t know how to unpack ‘accuracy’ in a way that can handle both probabilities and uncertainty, but I agree this is another discussion). I still suspect that the concept of uncertainty doesn’t help with instrumental rationality, bracketing the supposed immorality of assigning probabilities from sparse information. (Recall that you claimed Knightian uncertainty was ‘useful’.)
When I mention black swan theory, I guess I’m talking more about Taleb’s thoughts on the consequences of the fact you mentioned above (mostly covered in [this Wikipedia page](http://en.wikipedia.org/wiki/Black_swan_theory)).
Basic tenets as I understand them:
1. Most historical change is driven by black swans, which were unpredictable in their nature or magnitude.
2. In hindsight, it seems like black swans could have been predicted… but they could not have been.
3. The best way to deal with black swans is to:
3a. Build systems that can handle, and in fact gain from, disorder.
3b. Invest in systems in such a way that you would gain disproportionately from a black swan event.
3c. Avoid investing in systems in such a way that you would lose disproportionately from a black swan event.
I would like to see evidence for (1) which goes beyond “future is uncertain and large-impact events are important”.
(2) is just part of the definition of what a black swan is.
(3a) is Taleb’s idea of antifragility. I am not sure it’s practical. For any system that you can build, I can imagine an improbable event which will smash it.
As to (3b) Taleb ran a hedge fund for a while, if I recall correctly. It did badly. Taleb doesn’t like to mention it.
(3c) is just good risk management and again, see (3a). I don’t know what the practical suggestions are beyond diversification. Hedging against disaster (typically by buying volatility or selling short) implies losses if the disaster does not happen.
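That last point is easy to see in a toy simulation (all parameters invented): a disaster hedge that pays a small premium every year and collects only in rare crash years loses money in almost every individual year, whatever its long-run average.

```python
import random

random.seed(1)

# Toy disaster hedge: pay a small premium each year, collect a large
# payout in rare crash years. Premium, crash probability, and payout
# are invented numbers for illustration.
premium, crash_prob, crash_payout = 0.02, 0.05, 0.60

years = 10_000
pnl = []
for _ in range(years):
    crashed = random.random() < crash_prob
    pnl.append((crash_payout if crashed else 0.0) - premium)

print(f"years the hedge lost money: {sum(p < 0 for p in pnl) / years:.0%}")
print(f"mean annual hedge P&L:      {sum(pnl) / years:+.4f}")
```

With these made-up numbers the hedge loses in roughly 95% of years even though its mean is slightly positive; whether such a hedge is worth carrying depends entirely on the crash frequency and payout you assume.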
Have you read the book?
I have read The Black Swan, I have not read Antifragile.
Wikipedia says:
The investment strategy of the fund has been explained in a New Yorker article.[3][4] One of Empirica’s funds, Empirica Kurtosis LLC, was reported to have made a 60% return in 2000 followed by losses in 2001, 2002, and single digit gains in 2003 and 2004.
Without having the numbers for 2001 to 2004, it’s hard to say how badly it ran.
Universa, the hedge fund Taleb is currently advising, seems to be doing well enough to have $6 billion in assets under management, but I can’t easily find numbers on its returns.
I think anti-fragility makes sense if you think of it as existing over a range of stressors rather than being an absolute quality.
Scott Aaronson has a concrete example of Knightian uncertainty in his paper The Ghost in the Quantum Turing Machine.
I associate Knightian Uncertainty with Eliezer’s description of Expected Creative Surprises.
That is to say, I am uncertain what a rival company will do, but I know they will try to achieve a goal. When achieving that goal involves surprising me, I should expect them to surprise me even though I’m using the best model I can to model them.
There’s only one way to find out. Write your predictions down and calibrate yourself.
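A minimal way to do that in practice (the sample predictions below are placeholders): log each prediction with the probability you stated, then compare stated confidence to observed frequency per bucket.

```python
from collections import defaultdict

# Each entry: (probability you assigned, whether it came true).
# These sample predictions are placeholders.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for prob, outcome in predictions:
    buckets[prob].append(outcome)

for prob, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> observed {hit_rate:.0%} "
          f"over {len(outcomes)} predictions")
```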
This actually assumes that the Bayesian model is accurate.
Under Black Swan Theory, you can’t use past correct predictions to predict future correct predictions.
For example, most of the variance in the stock market is concentrated in a few days of history. I could have calibrated on every day leading up to one of those days and felt confident in my ability to predict the stock market… but just one of those days could have wiped out my portfolio.
On the Black Swan view of the world, calibration necessarily makes you overconfident.
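Here’s a synthetic illustration of that variance-concentration claim (the return process and its parameters are invented, not fitted to real market data):

```python
import random

random.seed(2)

# Heavy-tailed daily "returns": mostly calm days, with rare volatility
# spikes. Roughly ten years of trading days, all synthetic.
returns = []
for _ in range(2_500):
    scale = 0.10 if random.random() < 0.01 else 0.01
    returns.append(random.gauss(0, scale))

squared = sorted((r * r for r in returns), reverse=True)
top_share = sum(squared[:25]) / sum(squared)
print(f"share of total variance from the largest 25 of 2,500 days: {top_share:.0%}")
```

With these made-up parameters, the largest 1% of days typically carry around half of the total variance, which is the kind of concentration the comment describes.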
You don’t need to assume that things are normally distributed to be a Bayesian.
People who calibrate themselves usually don’t get more confident through the process but less confident.
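To illustrate the first point: a Bayesian is free to use a fat-tailed likelihood instead of a normal one. Compare the tail probability of a large move under two unit-scale models (this sketch uses scipy; the choice of df=3 is arbitrary):

```python
from scipy import stats

# Probability of an observation beyond 5 units of scale under each model.
print(f"normal:        {2 * stats.norm.sf(5):.2e}")
print(f"student-t(3):  {2 * stats.t.sf(5, df=3):.2e}")
```

The Student-t model assigns the extreme move tens of thousands of times more probability, so a Bayesian using it would be far less shocked by a “black swan” day.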
Don’t calibrate on a single variable.
But you do need to assume that you can somehow predict novel events based on previous data.
Just going back to my stock market example, what variables would I have calibrated on to predict 9/11 and its effects on the stock market?
I’m not arguing that you can predict the stock market. What you can do is calibrate yourself enough to see that it’s frequently doing things that you didn’t predict.