The “model” you never name for the stock price distribution is a normal or Gaussian distribution. You point out that this model fails spectacularly to predict the highest-variability point, a 20 sigma event. What you don’t point out is that this model fails to predict even the 5 sigma events.
Looking at the plot of “Daily Changes in the Dow,” we see it covers fewer than 95 years. Each trading year has a little fewer than 260 trading days in it, so the plot should contain at most about 24,000 daily changes. For a normal distribution, a 1/24,000 event, the biggest event we would expect to see once on this whole graph, would be somewhere between 4 and 4.5 sigma.
But instead of seeing one, or close to one, event of 4.5 sigma or higher in 24,000 daily changes, we see about 28 by my rough count looking at the graph. The data starts in 1915; by 1922 we have already seen 5 “1 in 100 years” events.
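For what it’s worth, here is a minimal sanity check of those numbers (a sketch in Python, assuming roughly 24,000 observations and treating large moves as two-sided):

```python
# Sanity check: what a normal model says about ~24,000 daily changes.
from scipy.stats import norm

n = 24_000

# Size of the single biggest move we'd expect: the move whose
# two-sided exceedance probability is 1/n.
z_biggest = norm.isf(1 / (2 * n))  # each tail gets half of 1/n
print(f"largest expected move: about {z_biggest:.2f} sigma")  # ~4.1 sigma

# Expected number of days at or beyond 4.5 sigma (in either direction):
expected = n * 2 * norm.sf(4.5)
print(f"expected 4.5-sigma days in {n} days: {expected:.2f}")  # ~0.16
```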
My point being: by 1922, we know the model that daily changes in the stock market fit a normal or Gaussian distribution is just pure crap. We don’t need to wait 70 years for a 20 sigma event to know the model isn’t just wrong, it is stupid. We suspect this within the first year or two of data, and we know with high reliability that the model is garbage by 1922.
What I don’t understand is why anybody rational would ever state that the data fits a normal distribution, when even within a few hundred observations we see an overwhelmingly obvious excess of large daily changes over what a normal distribution would predict.
One need not go to a mathematically well-described alternative distribution to produce a much better prediction of how often 20 sigma events will actually occur. Simply plotting the histogram of daily changes will show the tail probability falling off much more slowly than a normal distribution predicts. Before the 20 sigma event occurs, estimating from the actual distribution of changes might still underestimate its probability by a factor of a few (or not, I have not done the exercise on this data), but it will not underestimate it by tens of orders of magnitude.
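That exercise is easy to sketch. Assuming the daily changes are available as an array (here `daily_changes`, a hypothetical NumPy array of daily returns, not data I actually have), one can compare the empirical tail frequency against what a fitted normal predicts:

```python
import numpy as np
from scipy.stats import norm

# daily_changes: hypothetical array of daily returns (not provided here)
mu, sigma = daily_changes.mean(), daily_changes.std()
z = np.abs(daily_changes - mu) / sigma  # standardize each day's move

for k in (3, 4, 5, 6):
    empirical = np.mean(z >= k)   # observed tail frequency
    gaussian = 2 * norm.sf(k)     # what a fitted normal model predicts
    print(f">= {k} sigma: observed {empirical:.2e}, normal predicts {gaussian:.2e}")
```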
There is no “uncertainty” in modeling daily stock market changes as normally (or log-normally) distributed. One can look at just the first few hundred points in the measured data to be CERTAIN that a normal or a log-normal distribution is simply wrong.
The model used is the Black-Scholes model with, as you point out, a normal distribution. It endures, despite being clearly wrong, because there don’t seem to be any good alternatives.
It doesn’t endure, not in risk management anyway. Some alternatives for equities are e.g. the Heston model or other stochastic volatility approaches. Then there is the whole field of systemic risk, which studies correlated crashes: events where a bunch of equities all crash at the same time are way more common than they should be, and people are aware of this. See e.g. this analysis that uses a Hawkes model to capture the clustering of the crashes.
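For the curious, here is a minimal sketch of what a stochastic volatility model like Heston looks like in simulation (an Euler discretization with illustrative, uncalibrated parameter values):

```python
import numpy as np

def heston_path(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                xi=0.5, rho=-0.7, dt=1/252, n_steps=252, seed=0):
    """Simulate one price path under the Heston model:
       dS = mu*S*dt + sqrt(v)*S*dW1
       dv = kappa*(theta - v)*dt + xi*sqrt(v)*dW2,  corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    s, v = s0, v0
    prices = [s0]
    for _ in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
        s += mu * s * dt + np.sqrt(max(v, 0) * dt) * s * z1
        v += kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0) * dt) * z2
        prices.append(s)
    return np.array(prices)
```

The key difference from Black-Scholes is that the variance v is itself a mean-reverting random process, which produces volatility clustering and much fatter return tails.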
B-S endures, but is generally patched with insights like these.
I think I can see what you mean, and in fact I partially agree, so I’ll try to restate the argument. Correct me if you think I got it wrong.

In my experience it’s true that B-S is still used for quick-and-dirty bulk calculations or by organizations that don’t have the means to implement more complex models. But the model’s shortcomings are very well understood by the industry, and risk managers absolutely don’t rely on this model when e.g. calculating the capital requirement for Basel or Solvency purposes. If they did, the regulators would utterly demolish their internal risk model.
There is still a lot of work to be done, and there is what you call model uncertainty, at least when dealing with short time scales, but (fortunately) there’s been a lot of progress since B-S.
Why were you using an options-pricing model to predict stock returns? Black-Scholes is not used to model equity market returns.
How about the Black-Scholes model with a more realistic distribution?
Or does BS make annoying assumptions about its distribution, like that it has a well-defined variance and mean?
It assumes that the underlying asset’s returns follow a Gaussian distribution, but as Mandelbrot showed, a Lévy distribution is a better model.
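To make the Gaussian assumption concrete: the standard Black-Scholes call price is built directly on the normal CDF (a sketch of the textbook formula):

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price. The normal CDF (norm.cdf) is
    exactly where the Gaussian assumption on log-returns enters."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example: at-the-money call, one year to expiry, 5% rate, 20% volatility
print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20))  # ~10.45
```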
Black-Scholes is a name for a formula that was around before Black and Scholes published. Before then it was simply a heuristic used by traders. Those traders also scaled a few parameters around in ways that a normal distribution wouldn’t allow. Black and Scholes then went and proved the formula correct for a Gaussian distribution based on advanced math.
After Black and Scholes got a “Nobel prize”, people started to believe that the formula actually measures real risk and bet accordingly. Betting like that is beneficial for traders who make bonuses when they win but who don’t suffer that much if they lose all the money they bet. Or when a government bails you out after you lose all your money.
The problem with Lévy distributions is that they have a parameter c that you can’t simply estimate from a random sample, in the way you can estimate all the parameters of a Gaussian distribution if you have a big enough sample.
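A related way to see the estimation problem (a rough sketch, with alpha = 1.7 chosen purely for illustration): sample moments that settle down quickly for a Gaussian never converge for a Lévy-stable sample, because the underlying variance doesn’t exist.

```python
import numpy as np
from scipy.stats import levy_stable, norm

n = 100_000
# Lévy-stable draws with tail exponent alpha = 1.7 (infinite variance)
levy = levy_stable.rvs(1.7, 0.0, size=n, random_state=42)
gauss = norm.rvs(size=n, random_state=42)

# Sample standard deviation at increasing sample sizes
for m in (1_000, 10_000, 100_000):
    print(f"n={m:>7}  gaussian std={gauss[:m].std():.3f}  "
          f"levy-stable std={levy[:m].std():.3f}")
# The Gaussian estimate stabilizes near 1; the Lévy-stable one keeps
# jumping around, so naive sample-based estimation of its scale fails.
```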
*I’m no expert on the subject, but the above is my understanding from reading Taleb and other sources.
That is painfully visible.
I don’t think contributing noise to LW is useful.
Then please point me to the errors in the argument I made.
The point of having discussions is to refine ideas. I do not believe that the potential of being wrong is a reason to avoid talking about something.
You didn’t make an argument. You posted a bunch of sentences which are, basically, nonsense.
The problem is not the potential of being wrong; the problem is coherence and having some basic knowledge of the area you are talking about.
If you have no clue, ask for a gentle introduction, don’t spout nonsense on the forum.
I did read Nassim Taleb’s description of the situation in “Fooled By Randomness” and “The Black Swan”, so I’m not sure whether you think I misrepresented Taleb or whether you think Taleb’s arguments are simply wrong.
“No clue” is not a good representation of my knowledge, because I did read Taleb as an introduction to the topic. That doesn’t make me an expert, but it does give me an opinion on the topic that might be true or false.