“[subject] isn’t just about [subject matter]: it teaches you how to think”
Most (~70%) of the time it is a euphemism for “it’s useless, but we like it, so we still want to use taxpayers’ money to teach it”.
(If people really cared about teaching people how to think, they’d teach cognitive psychology, probability and statistics, game theory, and the like, not stuff like Latin.)
I expect you’re typical-minding here. I know enough linguistics enthusiasts who feel that learning new languages makes you think in new ways that I believe that to be their genuine experience, partly because I personally notice a slight difference in the way I think in different languages, though not as pronounced as what they describe.
Presumably they, being familiar with the thought-changing effects of Latin but not having felt the thought-changing effects of cognitive psychology etc. (either because of not having studied those topics enough, or because of not having a mind whose thought patterns would be strongly affected by studying them), would likewise say “if people really cared about teaching people how to think, they’d teach Latin and not stuff like cognitive psychology”. Just as you say what you say, either because of not having studied Latin enough, or because of not having a mind whose thought patterns would be strongly affected by the study of languages.
I know enough linguistics enthusiasts who feel that learning new languages makes you think in new ways that I believe that to be their genuine experience, partly because I personally notice a slight difference in the way I think in different languages, though not as pronounced as what they describe.
Sure, but the same happens with living languages as well.
not having studied Latin enough
I studied Latin for five years. Sure, it is possible that if I had studied it longer it would have changed my thought patterns more, but surely there are cheaper ways of doing that. (Even the first couple months of studying linear algebra affected me more, but I don’t expect that to apply to everybody so I didn’t list it upthread.)
A while ago I read that a betting firm would rather hire physics or math people than people with degrees in statistics, because the statistics folks too often think that real-world data is supposed to follow a normal distribution like the textbook examples they faced in university.
Outside of dedicated statistics programs, statistics classes often lead to students simply memorizing recipes without really developing good statistical intuition.
Teaching statistics often sounds much better in the abstract than it works out in practice.
That’s a good point, but on the other hand, even thinking that everything is Gaussian would be a vast improvement over thinking that everything is a Dirac delta, and that it is therefore not ludicrous to speculate about why some politician’s approval rating went down from 42.8% last week to 42.3% today when both figures come from surveys with a sample size of 1,600.
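To put a number on the polling example, here is a quick sanity check, assuming simple random sampling (the real survey design may well differ):

```python
# Standard error of a poll with p ~ 42.8% and n = 1600,
# assuming simple random sampling (illustrative only).
from math import sqrt

p, n = 0.428, 1600
se = sqrt(p * (1 - p) / n)   # std. error of a single poll
se_diff = sqrt(2) * se       # std. error of the week-to-week change
print(f"SE of one poll: {se:.1%}; SE of the change: {se_diff:.1%}")
# Prints roughly 1.2% and 1.7%: a 0.5-point move is well inside sampling noise.
```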
A well-trained mathematician or physicist who never took a formal course on statistics likely isn’t going to make that error, just as a well-trained statistician isn’t going to make that error.
I would think that the mathematician is more likely to get this right than the medical doctor who got statistics lessons at med school.
because the statistics folks too often think that real-world data is supposed to follow a normal distribution like the textbook examples they faced in university.
That is, ahem, bullshit. Stupid undergrads might think so for a short while, “statistics folks” do not.
Long-Term Capital Management (LTCM) was a hedge fund that lost billions of dollars because its founders, including Nobel Prize winners, assumed 1) that things which have been uncorrelated for a while will remain uncorrelated, and 2) that ridiculously low probabilities of failure, calculated from the assumption that events are normally distributed, actually apply to analyzing the likelihood of various disastrous investment strategies failing. That is, LTCM reported results as if something which is seen from data to be normal between +/- 2 sigma will be reliably normal out to 3, 4, 5, and 6 sigma.
Yes, there WERE people who knew LTCM were morons. But there were plenty who didn’t, including Nobel Prize winners with PhDs. It really happened and it still really happens.
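To illustrate the tail point, here is a small comparison (illustrative only, not LTCM’s actual model): a fat-tailed Student-t distribution rescaled to unit variance is hard to tell apart from a Gaussian within +/- 2 sigma, but its far tails are larger by orders of magnitude.

```python
# Compare Gaussian tail probabilities with a fat-tailed Student-t(4)
# rescaled to unit variance (df = 4 is an arbitrary illustrative choice).
from math import sqrt
from scipy import stats

DF = 4
scale = sqrt(DF / (DF - 2))  # rescales t(DF) to unit variance

for k in (2, 3, 4, 5, 6):
    p_norm = stats.norm.sf(k)             # Gaussian P(X > k sigma)
    p_fat = stats.t.sf(k * scale, df=DF)  # same tail under t(4)
    print(f"{k} sigma: normal {p_norm:.1e}, t(4) {p_fat:.1e}, "
          f"ratio {p_fat / p_norm:.0f}x")
```

At 2 sigma the two are nearly identical; by 6 sigma the fat-tailed distribution assigns a probability hundreds of thousands of times larger.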
I am familiar with LTCM and how it crashed and burned. I don’t think that the people who ran it were morons or that they assumed returns would be normally distributed. LTCM’s blowup is a prime example of “Markets can stay irrational longer than you can stay solvent” (which should be an interesting lesson for LW people who are convinced markets are efficient).
LTCM failed when its convergence trades (which did NOT assume things will be uncorrelated or that returns will be Gaussian) diverged instead and LTCM could not meet margin calls.
Hindsight makes everything easy. Perhaps you’d like to point out some obvious-to-you morons who haven’t blown up yet but certainly will?
An LTCM investor letter, quoted here, says:
“…only one year in fifty should it lose at least 20% of its portfolio.”
And of course, it proceeded to lose essentially all of its portfolio after operating for just a handful of years. Now, if in fact you are correct and the LTCM people did understand that things might be correlated and that tail probabilities would not be Gaussian, how do you imagine they even made a calculation like that?
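A back-of-the-envelope calculation suggests how a “one year in fifty” figure falls out of a normal model (the numbers below are mine, purely illustrative, not LTCM’s):

```python
# Under a normal model of annual returns, P(lose >= 20% in a year).
# mu and sigma are made-up illustrative values, not LTCM's figures.
from scipy import stats

mu, sigma = 0.15, 0.17  # assumed expected annual return and volatility
p_loss = stats.norm.cdf(-0.20, loc=mu, scale=sigma)
print(f"P(lose >= 20%) = {p_loss:.3f}  (about 1 year in {1 / p_loss:.0f})")
```

Putting any number at all on a far-tail event like this requires committing to some distributional shape, which is the point of the question above.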
Can we get a bit more specific than waving around marketing materials?
Precisely which things turned out to be correlated that the LTCM people assumed to be uncorrelated, and the returns on precisely which positions did they assume to be Gaussian when in fact they were not?
Or are you critiquing the VaR (value-at-risk) approach to risk management in general? There is a lot to critique, certainly, but would you care to suggest some adequate replacements?
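For what it’s worth, one standard alternative to parametric (Gaussian) VaR is historical-simulation VaR, which makes no distributional assumption; a minimal sketch, with made-up data standing in for a real return history:

```python
# Historical-simulation VaR: read the loss quantile straight off the
# empirical return history, no distribution assumed.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for ~10 years of daily portfolio returns (made-up data):
daily_returns = rng.standard_t(df=4, size=2500) * 0.01

# One-day 99% VaR: the loss exceeded on only 1% of historical days.
var_99 = -np.percentile(daily_returns, 1)
print(f"1-day 99% historical VaR: {var_99:.2%} of portfolio value")
```

Of course this just trades one assumption for another (that the future resembles the sampled past), which is arguably the deeper problem with any flavor of VaR.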
“Statisticians think everything is normally distributed” seems to be one of those weirdly enduring myths. I’d love to know how it gets propagated.
I strongly suspect that a large part of its recent popularity is because, in the recent CDO-driven crash, it suited the interests of the (influential) people whose decisions were actually responsible to spread the idea that the problem was that those silly geeky quants didn’t understand that everything isn’t uncorrelated Gaussians, ha ha ha.
Someone was overly impressed by the Central Limit Theorem… X-)
I can’t say I’ve run into it before (whereas “economists think humans are all rational self-interested agents”, jeez…)
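If the Central Limit Theorem is indeed the culprit, the confusion is easy to reproduce: the CLT says that averages of many draws look normal, not the raw data themselves. A small demonstration with heavily skewed data:

```python
# The CLT in action: exponential data are strongly skewed, but means
# of 50 draws are already close to symmetric.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=(10_000, 50))
sample_means = data.mean(axis=1)

def skewness(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print(f"skewness of raw data:     {skewness(data.ravel()):.2f}")  # ~2.0
print(f"skewness of sample means: {skewness(sample_means):.2f}")  # ~0.3
```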
Given that I remember spending a year of AP Statistics only doing calculations with things we assumed to be normally distributed, it’s not an unreasonable objection to at least some forms of teaching statistics.
Hopefully people with statistics degrees move beyond that stage, though.
I read that Germans are often anti-Semites, is it true?