Interesting! I came to it from googling about definitions of CLT in terms of convolutions. But I have one gripe:
> does that mean the form of my uncertainty about things approaches Gaussian as I learn more?
I think a counterexample would be your uncertainty over the number of book sales for your next book. There are recursive network effects: more book sales cause more book sales. The more books you (first-order) expect to sell, the more books you ought to (second-order) expect to sell. In other words, your expectation over X indirectly depends on your expectation over X (or at least, it ought to, insofar as there’s recursion in the territory as well).
This means that for every independent source of evidence you update on, you ought to apply a recursive correction in the upward direction. In which case you won’t converge to a symmetric Gaussian.
Which is all a long-winded way of saying that your posterior distribution will only converge to a Gaussian in proportion to the independence of your evidence.
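A toy simulation can make the contrast concrete (my own sketch, not from the post; the 0.1 feedback coefficient, step counts, and uniform increments are arbitrary choices). Summing independent increments gives a roughly symmetric, Gaussian-looking total, while letting each increment scale with the running total, a crude stand-in for "more sales cause more sales", produces a visibly right-skewed distribution:

```python
import random
import statistics

random.seed(0)

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

N_TRIALS, N_STEPS = 10_000, 100

# Independent increments: the CLT applies and the sum is ~Gaussian,
# so its skewness is near 0.
indep = [sum(random.uniform(0, 1) for _ in range(N_STEPS))
         for _ in range(N_TRIALS)]

# Dependent increments: each step's size scales with the running total,
# a toy version of the recursive "sales cause sales" feedback.
# The feedback coefficient 0.1 is an arbitrary illustration.
dep = []
for _ in range(N_TRIALS):
    total = 0.0
    for _ in range(N_STEPS):
        total += random.uniform(0, 1 + 0.1 * total)
    dep.append(total)

print("independent skewness:", round(skewness(indep), 3))  # near 0
print("dependent skewness:  ", round(skewness(dep), 3))    # clearly positive
```

The dependent process compounds multiplicatively, so its totals come out log-normal-ish with a heavy right tail, which is exactly the asymmetry that keeps the posterior from converging to a symmetric Gaussian.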
Yes, agreed: I’ve looked into non-identical distributions in previous posts and found that identicality isn’t important, but I haven’t looked at non-independence at all. I agree that dependent chains, like the books example, are an open question!