I’ve noticed a lot of disciplines, particularly ones that sometimes have to justify their value, often make a similar claim:
“[subject] isn’t just about [subject matter]: it teaches you how to think”
This raises some interesting questions:
I can believe, for example, that Art History instils in its students some useful habits of thought, but I suspect they’re less general than those from a discipline with an explicit problem-solving focus. What kind of scheme could one construct to score the meta-cognitive skills learned from a particular subject?
Are there any subjects which are particularly unlikely to make this claim? Are any subjects just composed of procedural knowledge without any overarching theory, cross-domain applicability, or necessary transferable skills?
Are there particularly potent combinations of skills, or particularly useless ones? It seems that a Physics degree and a Maths degree would have similar “coverage” in terms of thinking habits they instil, but a Physics degree and a Law degree would have much broader coverage. “I have technical skills, but I also have people-skills” is a fairly standard contemporary idea. Do Physics and Law have strikingly different coverages because Physics Lawyers don’t really need to exist?
I would interpret that claim as: “we may be practically useless, but we are still fucking high-status!” :D
The claim isn’t just made with arguably useless disciplines, though. Many people argue (quite rightly, IMO) that programming doesn’t just teach you to command machines to do your bidding, but also instils powerful thinking tools. So even if kids don’t grow up to be software developers, it’s still valuable for them to learn programming. Similar arguments could be made for law or finance.
Slightly off topic, but I both program and play guitar and for the longest time I was wondering why I was getting an overwhelming feeling of the two bleeding into each other. While playing guitar, it would “feel” like I was also coding. Eventually I figured out that the common thread is probably the general task of algorithm optimization.
There’s no way for me to tell if programming made me a better guitar player or vice versa.
Could you make that argument for finance? I see that learning finance is very useful for personal financial decisions but how does it provide use beyond that?
Obviously “finance” is a very wide area that covers a lot of different ideas, but my observation of “finance people” is that they have a powerful mental vocabulary for thinking about what kind of a value something is and what can be done with it over time. For example: the difference between stock values and flow values, expected return of a portfolio of assets, the leveraging of credit, the mitigation of risk.
More generally, they seem to be able to look at some number assigned to a thing, and observe that it’s morphologically similar to some other number assigned to some different thing, and understand what sort of things can happen to both those numbers, and what sort of process is required to turn one sort of number into another sort of number.
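The "mental vocabulary" described above can be made concrete with a small sketch. This is a minimal, hypothetical two-asset example (all weights, returns, volatilities, and the correlation are made up for illustration) showing why "expected return of a portfolio" and "mitigation of risk" are distinct calculations:

```python
# Hypothetical two-asset portfolio: every number below is invented for
# illustration, not taken from any real market data.
import math

weights = [0.6, 0.4]        # fraction of wealth in each asset
exp_returns = [0.07, 0.03]  # assumed expected annual returns
vols = [0.20, 0.05]         # assumed annual standard deviations
correlation = 0.1           # assumed correlation between the two assets

# Expected return is just the weighted average of the assets' returns...
portfolio_return = sum(w * r for w, r in zip(weights, exp_returns))

# ...but risk is not: portfolio variance includes a covariance term, which
# is where risk mitigation via diversification comes from.
portfolio_var = (weights[0] ** 2 * vols[0] ** 2
                 + weights[1] ** 2 * vols[1] ** 2
                 + 2 * weights[0] * weights[1] * correlation * vols[0] * vols[1])
portfolio_vol = math.sqrt(portfolio_var)

print(round(portfolio_return, 3))  # 0.054
print(round(portfolio_vol, 3))
```

The portfolio's volatility (about 0.124 here) comes out lower than the naive weighted average of the two volatilities (0.14), which is the diversification effect in miniature.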
Finance is about marshalling resources and using them to efficiently create a lot more wealth. Since wealth is at minimum the thing that keeps us from working 24/7 on getting enough food to eat, and generally gives us the kind of free time we need to invent AIs, post on message boards, have hobbies, and try to get the hot chicks, it can be quite useful even for a non-Wall-Street worker. Think of finance as the thing that keeps you from carrying a balance on your credit card or buying lottery tickets as investments.
That’s not an argument for the claim that finance skills instill thinking tools that are useful in other domains. It’s just an argument that finance skills are useful.
Physics lawyers definitely need to exist. I would strongly like to get an injunction against the laws of thermodynamics.
Seems to me that “teaches you how to think” does not necessarily imply instilling habits of thought. I would interpret that (say, in the context of Art History) as:
Supplying you with some maps of territory unknown to you
Giving you some tools to explore and map the territory further
Pointing you towards some well-worn tracks as “default” ways of thinking about the issues involved
The habits of thought are not involved in all of this—it’s more of a broadening-your-horizons exercise.
Most (~70%) of the time it is a euphemism for "it's useless, but we like it so we still want to use taxpayers' money to teach it".
(If people really cared about teaching people how to think, they’d teach cognitive psychology, probability and statistics, game theory, and the like, not stuff like Latin.)
I expect you’re typical-minding here. I know enough linguistics enthusiasts who feel that learning new languages makes you think in new ways that I believe that to be their genuine experience. Also because I personally find a slight difference in the way I think in different languages, though not as pronounced as those people.
Presumably they, being familiar with the thought-changing effects of Latin but not having felt the thought-changing effects of cognitive psychology etc. (either because of not having studied those topics enough, or because of not having a mind whose thought patterns would be strongly affected by the study of them), would likewise say "if people really cared about teaching people how to think, they'd teach Latin and not stuff like cognitive psychology". Just like you say what you say, either because of not having studied Latin enough, or because of not having a mind whose thought patterns would be strongly affected by the study of languages.
Sure, but the same happens with living languages as well.
I studied Latin for five years. Sure, it is possible that if I had studied it longer it would have changed my thought patterns more, but surely there are cheaper ways of doing that. (Even the first couple months of studying linear algebra affected me more, but I don’t expect that to apply to everybody so I didn’t list it upthread.)
A while ago I read that a betting firm would rather hire physics or math people than people with degrees in statistics, because the statistics folks too often think that real-world data is supposed to follow a normal distribution, like the textbook examples they faced in university.
Outside of specific statistics programs, statistics classes often lead to students simply memorizing recipes rather than developing good statistical intuition.
Teaching statistics often sounds much better in the abstract than it turns out in practice.
That's a good point, but on the other hand, even thinking that everything is a Gaussian would be a vast improvement over thinking that everything is a Dirac delta, and that it is therefore not ludicrous to speculate about why some politician's approval rating went down from 42.8% last week to 42.3% today, when both figures come from surveys with a sample size of 1600.
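A quick back-of-the-envelope check of the polling example above. This sketch assumes simple random sampling and uses the standard error of a sample proportion with the usual 1.96 multiplier for a ~95% interval:

```python
# Sampling noise in an approval rating of ~42.8% from a survey of n = 1600,
# assuming simple random sampling (a simplification of real polling methodology).
import math

n = 1600
p = 0.428

# Standard error of a sample proportion, and the conventional ~95% margin of error.
se = math.sqrt(p * (1 - p) / n)
margin = 1.96 * se

print(round(100 * margin, 1))  # 2.4  (percentage points)
```

With a margin of error around 2.4 percentage points on each survey, a week-over-week move of 0.5 points is comfortably inside sampling noise, which is exactly why the speculation is ludicrous.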
A well trained mathematician or physicist who never took a formal course on statistics likely isn’t going to make that error, just as a well trained statistician isn’t going to make that error.
I would think that the mathematician is more likely to get this right than the medical doctor who got statistics lessons at med school.
That is, ahem, bullshit. Stupid undergrads might think so for a short while, “statistics folks” do not.
Long Term Capital Management (LTCM) was a hedge fund that lost billions of dollars because its founders, including Nobel Prize winners, assumed (1) that things which have been uncorrelated for a while will remain uncorrelated, and (2) that ridiculously low probabilities of failure, calculated from assumptions that events are normally distributed, actually apply when analyzing the likelihood of various disastrous investment strategies failing. That is, LTCM reported results as if something which is seen from data to be normal between ±2 sigma will be reliably normal out to 3, 4, 5, and 6 sigma.
Yes, there WERE people who knew LTCM were morons. But there were plenty who didn't, including Nobel Prize winners with PhDs. It really happened and it still really happens.
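To see why extrapolating a normal fit out to the tails is so dangerous, it helps to look at how fast Gaussian tail probabilities vanish. This sketch computes the two-sided tail probability P(|X| > k) for a standard normal variable, using the identity P(|X| > k) = erfc(k/√2):

```python
# Two-sided tail probabilities of a standard normal distribution.
# If real returns have fatter tails than the Gaussian fitted to the
# central +/- 2 sigma region, these numbers wildly understate the risk.
import math

def normal_two_sided_tail(k):
    """P(|X| > k) for a standard normal variable X."""
    return math.erfc(k / math.sqrt(2))

for k in [2, 3, 4, 5, 6]:
    print(k, normal_two_sided_tail(k))
```

Under the normal assumption the tail probability collapses from roughly 4.6% at 2 sigma to about 2e-9 at 6 sigma, so a model calibrated only on the central region can make a catastrophic event look like a once-in-millions-of-years impossibility.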
I am familiar with LTCM and how it crashed and burned. I don’t think that people who ran it were morons or that they assumed returns will be normally distributed. LTCM’s blowup is a prime example of “Markets can stay irrational longer than you can stay solvent” (which should be an interesting lesson for LW people who are convinced markets are efficient).
LTCM failed when its convergence trades (which did NOT assume things will be uncorrelated or that returns will be Gaussian) diverged instead and LTCM could not meet margin calls.
Hindsight makes everything look easy. Perhaps you'd like to point out, today, some people who are obviously (to you) morons who haven't blown up yet but certainly will?
An LTCM investor letter, quoted here, says:
"…only one year in fifty should it lose at least 20% of its portfolio."
And of course, it proceeded to lose essentially all of its portfolio after operating for just a handful of years. Now, if in fact you are correct and the LTCM people did understand that things might be correlated and that tail probabilities would not be Gaussian, how do you imagine they even made a calculation like that?
Can we get a bit more specific than waving around marketing materials?
Precisely which things that the LTCM people assumed to be uncorrelated turned out to be correlated, and precisely which positions' returns did they assume to be Gaussian when in fact they were not?
Or are you critiquing the VaR approach to risk management in general? There is a lot to critique, certainly, but would you care to suggest some adequate replacements?
“Statisticians think everything is normally distributed” seems to be one of those weirdly enduring myths. I’d love to know how it gets propagated.
I strongly suspect that a large part of its recent popularity is because in the recent CDO-driven crash it suited the interests of the (influential) people whose decisions were actually responsible to spread the idea that the problem was that those silly geeky quants didn’t understand that everything isn’t uncorrelated Gaussians, ha ha ha.
Someone was overly impressed by the Central Limit Theorem… X-)
I can’t say I ran into it before (whereas “economists think humans are all rational self-interested agents”, jeez...)
Given that I remember spending a year of AP Statistics doing only calculations on things we assumed to be normally distributed, it's not an unreasonable objection to at least some ways of teaching statistics.
Hopefully people with statistics degrees move beyond that stage, though.
I read that Germans are often anti-Semites; is it true?
I suspect that with mastery of a skill comes an ability to understand mastery itself, in that (on a variation of man-with-a-hammer syndrome) holding mastery of one area will help you better understand the direction to head in when mastering, and learning in, other areas.

To me the line now reads: "mastery of [subject] isn't just about [subject matter]: mastery teaches you how to think"

where [subject] can vary; the significance of what people are trying to convey is maybe not in the [subject matter] but in the experience of learning.