I once took a math course where the first homework assignment involved sending the professor an email that included what we wanted to learn in the course (this assignment was mostly for logistical reasons: professor’s email now autocompletes, eliminating a trivial inconvenience of emailing him questions and such, professor has all our emails, etc). I had trouble answering the question, since I was after learning unknown unknowns, thereby making it difficult to express what exactly it was I was looking to learn. Most mathematicians I’ve talked to agree that, more or less, what is taught in secondary school under the heading of “math” is not math, and it certainly bears only a passing resemblance to what mathematicians actually do. You are certainly correct that the thing labelled in secondary schools as “math” is probably better learned differently, but insofar as you’re looking to learn the thing that mathematicians refer to as “math” (and the fact you’re looking at Spivak’s Calculus indicates you, in fact, are), looking at how to better learn the thing secondary schools refer to as “math” isn’t actually helpful. So, let’s try to get a better idea of what mathematicians refer to as math and then see what we can do.
The two best pieces I’ve read that really delve into the gap between secondary school “math” and mathematicians’ “math” are Lockhart’s Lament and Terry Tao’s Three Levels of Rigour. The common thread between them is that secondary school “math” involves computation, whereas mathematicians’ “math” is about proof. For whatever reason, computation is taught with little motivation, largely analogously to the “intolerably boring” approach to language acquisition; proof, on the other hand, is mostly taught by proving a bunch of things, which, unlike computation, typically takes some degree of creativity, meaning it can’t be taught in a rote manner. In general, a student of mathematics learns proofs by coming to accept a small set of highly general proof strategies (to prove a theorem of the form “if P then Q”, assume P and derive Q); they first practice these on the simplest problems available (usually set theory) and then on progressively more complex ones. To continue Lockhart’s analogy to music, this is somewhat like learning how to read the relevant clef for your instrument and then playing progressively more difficult music, starting with scales. [1] There’s some amount of symbol-pushing, but most of the time, there’s insight to be gleaned from it (although, sometimes, you just have to say “this is the correct result because the algebra says so”, but this isn’t overly common).
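To make the “assume P and derive Q” template concrete, here’s the flavor of proof a transition course starts with (my own toy example, not one from Lockhart or Tao):

```latex
\textbf{Claim.} If $n$ is an even integer, then $n^2$ is even.

\textbf{Proof.} Assume $n$ is even (this is the ``assume $P$'' step), so
$n = 2k$ for some integer $k$. Then
\[ n^2 = (2k)^2 = 4k^2 = 2\,(2k^2). \]
Since $2k^2$ is an integer, $n^2$ is twice an integer, i.e.\ even
(we have derived $Q$). $\blacksquare$
```

Much of the transition course is internalizing a dozen such templates (direct proof, contrapositive, contradiction, induction) until deploying the right one is automatic.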
Proofs themselves are interesting creatures. In most schools, there’s a “transition course” that takes aspiring math majors who have heretofore only done computation and trains them to write proofs; any proofy math book written for any other course just assumes this knowledge but, in my experience (both personally and working with other students), trying to make sense of what’s going on in these books without familiarity with what makes a proof valid or not just doesn’t work; it’s not entirely unlike trying to understand a book on arithmetic that just assumes you understand what the + and * symbols mean. This transition course more or less teaches you to speak and understand a funny language mathematicians use to communicate why mathematical propositions are correct; without taking the time to learn this funny language, you can’t really understand why the proof of a theorem actually does show the theorem is correct, nor will you be able to glean any insight as to why, on an intuitive level, the theorem is true (this is why I doubt you’d have much success trying to read Spivak, absent a transition course). After the transition course, this funny language becomes second nature, it’s clear that the proofs after theorem statements, indeed, prove the theorems they claim to prove, and it’s often possible, with a bit of work [2], to get an intuitive appreciation for why the theorem is true.
To summarize: the math I think you’re looking to learn is proofy, not computational, in nature. This type of math is inherently impossible to learn in a rote manner; instead, you get to spend hours and hours by yourself trying to prove propositions [3], which isn’t dull, but may take some practice to appreciate (as noted below, if you’re at the right level, this activity should be flow-inducing). The first step is a transition course, which will teach you how to write proofs and to distinguish correct proofs from incorrect ones; there will probably be some set theory.
So, you want to transition; what’s the best way to do it?
Well, super ideally, the best way is to have an experienced teacher, available to answer questions, explain what’s going on and connect the intuitive with the rigorous. For most things mathematical, assuming a good book exists, I think the subject can be learned entirely from a book, but this is an exception. That said, How to Prove It is highly rated, I had a good experience with it, and others I’ve recommended it to have done well. If you do decide to take this approach and have questions, pm me your email address and I’ll do what I can.
This analogy breaks down somewhat when you look at the arc musicians go through. The typical progression for musicians I know is (1) start playing in whatever grade their school’s music program starts, (2) focus mainly on ensemble (band, orchestra) playing, and (3) after a high (>90%) attrition rate, we’re left with three groups: those who are in it for easy credit (orchestra doesn’t have homework!); those who practice a little, but are too busy or not interested enough to make a consistent effort; and those who are really serious. By the time they reach high school, everyone in this third group has private instructors and, if they’re really serious about getting good, goes back and spends a lot of time practicing scales. Even at the highest level, musicians review scales, often daily, because they’re the most fundamental thing: I once had the opportunity to ask Gloria dePasquale what the best way to improve general ability was, and she told me that there are 12 major scales and 36 minor scales and, IIRC, that she practices all of them every day. Getting back to math, there’s a lot here that’s not analogous. Most notably, there’s no analogue to practicing scales, no fundamental-level thing that you can put large amounts of time into practicing and get general returns to mathematical ability: there’s just proofs, and once you can tell a valid proof from an invalid one, there’s almost no value in studying set theory proofs very closely. There’s certainly an aesthetic sense that can be refined, but studying whatever proofs happen to be at or slightly above your current level is probably the most helpful (as with flow): if it’s too easy, you’re just bored and learn nothing (there’s nothing there to learn), and if it’s too hard, you get frustrated and still learn nothing (since you’re unable to understand what’s going on).
“With a bit of work”, used in a math text, means that a mathematically literate reader who has understood everything up until the phrase’s invocation should be able to come up with the result themselves, and that it will require no real new insight; e.g., “with a bit of work, it can be shown that, for every positive integer n, (1 + 1/n)^n < e < (1 + 1/n)^(n+1)”. This does not preclude needing to do several pages of scratch work or spending a few minutes trying various approaches until you figure out one that works; the tendency is toward understatement. Relatedly, most math texts will leave proofs that require no novel insights or weird tricks as exercises for the reader. In Linear Algebra Done Right, for instance, Axler will often state a theorem followed by “as you should verify”, which should require some writing on the reader’s part; he explicitly spells this out in the preface, but this is standard in every math text I’ve read (and I only bother reading the best ones). You cannot read mathematics like a novel; as Axler notes, it can often take over an hour to work through a single page of text.
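That particular inequality is also easy to sanity-check numerically; a quick sketch in plain Python (my own, just to illustrate the claim):

```python
import math

# Check (1 + 1/n)^n < e < (1 + 1/n)^(n+1) for several n.
# The two bounds squeeze e from below and above as n grows.
for n in [1, 2, 10, 100, 10_000]:
    lower = (1 + 1 / n) ** n
    upper = (1 + 1 / n) ** (n + 1)
    assert lower < math.e < upper
    print(f"n={n:>6}: {lower:.6f} < {math.e:.6f} < {upper:.6f}")
```

Of course, a numerical check is exactly the kind of computation that is not a proof; actually proving the inequality is the “bit of work” the phrase is asking of you.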
Most math books present definitions, state theorems, and give proofs. In general, you definitely want to spend a bit of time pondering definitions: noticing why they’re correct, how they match your intuition, and why other candidate definitions weren’t used. When you come to a theorem, you should always take a few minutes to try to prove it before reading the book’s proof. If you succeed, you’ll probably learn something about how to write proofs better by comparing what you have to what the book has, and if you fail, you’ll be better acquainted with the problem and thus have more of an idea as to why the book’s doing what it’s doing; it’s just an empirical result (which I read ages ago and cannot find) that you’ll understand a theorem better by trying to prove it yourself, successful or not. It’s also good practice. There’s some room for Anki (I make cards for definitions—word on front, definition on back—and theorems—for which reviews consist of outlining enough of a proof that I’m confident I could write it out fully if I so desired), but I spend the vast majority of my time trying to prove things.
Your comment made me think, and I’ll look up some of the recommendations. I like the analogy with musicians and also the part where you talked about how the analogy breaks down.
However, I’d like to offer a bit of a different perspective to the original poster on this part of what you said.
To summarize: the math I think you’re looking to learn is proofy, not computational, in nature.
Your advice is good, given this assumption. But this assumption may or may not be true. Given that the post says:
I don’t care what field it is.
I think there’s the possibility that the original poster would be interested in computational mathematics.
Also, it’s not either-or; that’s a false dichotomy. Learning both is possible and useful. You likely know this already, and perhaps the original poster does as well, but since the original poster is not familiar with much math, I thought I’d point that out in case it’s something that wasn’t obvious. It’s hard to tell, writing on the computer and imagining a person at the other end.
If the word “computational” is being used to mean following instructions by rote without really understanding why, or doing the same thing over and over with no creativity or insight, then it does not seem to be what the original poster is looking for. However, if it is used to mean creatively understanding real world problems, and formulating them well enough into math that computer algorithms can help give insights about them, then I didn’t see anything in the post that would make me warn them to steer clear of it.
There are whole fields of human endeavor that use math and include the term “computational” and I wouldn’t want the original poster to miss out on them because of not realizing that the word may mean something else in a different context, or to think that it’s something that professional mathematicians or scientists or engineers don’t do much. Some mathematicians do proofs most of the time, but others spend time on computation, or even proofs about computation.
Fields include computational fluid dynamics, computational biology, computational geometry...the list goes on.
Speaking of words meaning different things in different contexts, that’s one thing that tripped me up when I was first learning some engineering and math beyond high school. When I read more advanced books, I knew that when I was looking at an unfamiliar word I had to look it up, but I hadn’t realized that some words I was already familiar with had been redefined to mean something else in the new context, or that the notation had symbols that meant one thing in one context and another thing in another. For example, vertical bars on either side of something could mean “the absolute value of” or “the determinant of this matrix”, and “normal forces” meant “forces perpendicular to the contact surface”. Textbooks are generally terribly written and often leave out a lot.
In other words, the jargon can be sneaky and sound exactly like words that you already know. It’s part of why mathematical books seem so nonsensical to outsiders.
Excellent points; “rigorous” would have been a better choice. I haven’t yet had the time to study any computational fields, but I’m assuming the ones you list aren’t built on the “fuzzy notions, and hand-waving” that Tao talks about.
I should also add that I don’t necessarily agree 100% with everything in Lockhart’s Lament; I do think, however, that he does an excellent job of identifying problems in how secondary school math is taught, and he does a better job than I could of contrasting “follow the instructions” math with “real” math for a lay person.
Interesting. One of my recurring themes is that mathematics and statistics are very different things and require different kinds of brains/thinking—people good at one will rarely be good at the other.
If you define mathematics as being about proofs (and not so much about computation), the distinction becomes more pronounced: statistics isn’t about proofs at all, it’s about dealing with uncertainty. There are certainly areas where they touch (e.g. proving that certain estimators have certain properties), but at their core, mathematics and statistics are not similar at all.
I’m skeptical that there is any such distinction. “Computational” math is near-worthless in the absence of a proof of correctness for what you’re computing. Even statistics relies on such proofs, though sometimes these can only provide approximate results. (For instance, maximum-likelihood methods are precisely optimal under the simplifying assumption of uniform priors and a 0-1 loss function.)
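To make that parenthetical concrete, here’s a toy sketch (my own illustration, plain Python): with a uniform prior, the posterior is proportional to the likelihood, so the estimate minimizing expected 0-1 loss (the posterior mode) coincides with the maximum-likelihood estimate.

```python
from math import comb

# Toy problem: estimate a coin's bias after seeing 7 heads in 10 flips,
# over a discrete grid of candidate biases.
thetas = [i / 100 for i in range(1, 100)]
heads, flips = 7, 10

def likelihood(theta):
    return comb(flips, heads) * theta**heads * (1 - theta)**(flips - heads)

# Uniform prior: every candidate equally likely a priori,
# so posterior ∝ likelihood and the two modes must coincide.
prior = 1 / len(thetas)
posterior = {t: likelihood(t) * prior for t in thetas}

mle = max(thetas, key=likelihood)
map_estimate = max(posterior, key=posterior.get)
assert mle == map_estimate  # both pick theta = 0.7
```

With a non-uniform prior or a different loss function, the two estimates generally come apart.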
Statistical tools rely on such proofs.

Statistics is an applied science, similar to engineering. It has to deal with the messy world, where you might need to draw conclusions from a small data set of uncertain provenance, where some outliers might be data entry mistakes (or maybe not), where you are uncertain of the shape of the distributions you are dealing with, where you have a sneaking suspicion that the underlying process is not stable in time, etc. None of the nice assumptions underlying nice proofs of optimality apply. You still need to analyse this data set.

Except for all that pesky theoretical statistics.

Math people can have that :-) It is, basically, applied math, anyway.

Except it’s not math. Disciplines are socially constructed: statistics is what statisticians do; applied math is what applied math people do. There are lots of very theoretical stats departments. I think you’re having a confusion similar to the one people sometimes have about computer science and programming.
I think if you say stuff like “well, all those people who publish in the Annals of Statistics are applied math people”, I’m not sure what you’re really saying. There is some intersection with applied math, ML, etc., but theoretical stats has its own set of big ideas that define the field and give it character.
I think you are having a similar confusion people have sometimes about computer science and programming.
I don’t think I do? I am well aware of the famous Dijkstra’s quote.
As you mentioned, statistics is what statisticians do. Most statisticians don’t work in academia. I don’t doubt there are a lot of theory-heavy stats departments, just like there are a lot of physics-heavy engineering departments.
Going up one meta-level, I’m less interested in where social reality has drawn the discipline boundaries, and more interested in feeling for the joints in the underlying territory.
Not sure why we are having this discussion. Statistics is a discipline with certain themes, like “intelligently using data for conclusions we want.” These themes are sufficient to give it its own character, and make it both an applied and theoretical discipline. I don’t think you are a statistician, right? Why are you talking about this?
Statistics is as much an applied discipline as physics.
You can post about whatever you want. I have objections if you start mischaracterizing what statistics is about for fun on the internet. Fun on the internet is great, being snarky on the internet is ok, misleading people is not.
edit: In fact, you can view this whole recent “data science” thing that statisticians are so worried about as a reaction to the statistics discipline becoming too theoretical and divorced from actual data analysis problems. [This is a controversial opinion, I don’t think I share it, quite.]
I don’t believe I’m mischaracterizing statistics. My original point was an observation that, in my experience, good mathematicians and good statisticians are different. Their brains work differently. To use an imperfect analogy, good C programmers and good Lisp programmers are also quite different. You just need to think in a very different manner in Lisp compared to C (and vice versa). That, of course, doesn’t mean that a C programmer can’t be passably good in Lisp.
I understand that in academia, statistics departments usually focus on theoretical statistics. That’s fine—I don’t particularly care about “official” discipline boundaries. For my purposes I would like to draw a divide between theoretical statistics and, let’s call it, practical statistics. I find it useful to classify theoretical statistics as applied math, and practical statistics as something different from that.
Data science is somewhat different from traditional statistics, but I’m not sure its distinction lies on the theoretical-practical divide. As a crude approximation, I’d say that traditional statistics is mostly concerned with extracting precise and “provable” information out of small data sets, and data science tends to drown in data and so loves non-parametric models and ML in particular.
None of the nice assumptions underlying nice proofs of optimality apply.
Well, this is a matter of degree. There is a reason we use these tools in the first place. A good statistician must be quite aware of the underlying assumptions of each tool, if only so that they can switch to something else when warranted. (For instance, use “robust” methods which try to identify and appropriately discount outliers.)
A good statistician must be quite aware of the underlying assumptions of each tool
Well, of course.
and appropriately discount outliers
Heh. The word “appropriately” is a tricky one. There is a large variety of robust methods which use different ways of discounting outliers, naturally with different results. The statistician will need to figure out what’s “appropriate” in this particular case and proofs don’t help here.
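A tiny sketch of how much weight “appropriately” carries (my own example, standard library only): three perfectly defensible ways of discounting a suspicious point give different summaries, and no proof picks between them.

```python
import statistics

# Nine plausible readings plus one suspicious point: a data-entry
# slip, or a real extreme value? The data set alone won't tell you.
data = [9.6, 10.1, 10.0, 9.9, 10.4, 10.0, 9.7, 10.3, 10.1, 100.0]

mean = statistics.mean(data)                    # discounts nothing
median = statistics.median(data)                # ignores magnitudes entirely
trimmed = statistics.mean(sorted(data)[1:-1])   # drops one point per tail

# Different discounting rules, different answers.
print(f"mean={mean:.2f}  median={median:.2f}  trimmed={trimmed:.4f}")
```

Whether the right summary is near 10 or near 20 depends on what that 100.0 actually was, which is a judgment about the world, not a theorem.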
I once took a math course where the first homework assignment involved sending the professor an email that included what we wanted to learn in the course (this assignment was mostly for logistical reasons: professor’s email now autocompletes, eliminating a trivial inconvenience of emailing him questions and such, professor has all our emails, etc). I had trouble answering the question, since I was after learning unknown unknowns, thereby making it difficult to express what exactly it was I was looking to learn. Most mathematicians I’ve talked to agree that, more or less, what is taught in secondary school under the heading of “math” is not math, and it certainly bears only a passing resemblance to what mathematicians actually do. You are certainly correct that the thing labelled in secondary schools as “math” is probably better learned differently, but insofar as you’re looking to learn the thing that mathematicians refer to as “math” (and the fact you’re looking at Spivak’s Calculus indicates you, in fact, are), looking at how to better learn the thing secondary schools refer to as “math” isn’t actually helpful. So, let’s try to get a better idea of what mathematicians refer to as math and then see what we can do.
The two best pieces I’ve read that really delve into the gap between secondary school “math” and mathematician’s “math” are Lockhart’s Lament and Terry Tao’s Three Levels of Rigour. The common thread between them is that secondary school “math” involves computation, whereas mathematician’s “math” is about proof. For whatever reason, computation is taught with little motivation, largely analogously to the “intolerably boring” approach to language acquisition; proof, on the other hand, is mostly taught by proving a bunch of things which, unlike computation, typically takes some degree of creativity, meaning it can’t be taught in a rote manner. In general, a student of mathematics learns proofs by coming to accept a small set of highly general proof strategies (to prove a theorem of the form “if P then Q”, assume P and derive Q); they first practice them on the simplest problems available (usually set theory) and then on progressively more complex problems. To continue Lockhart’s analogy to music, this is somewhat like learning how to read the relevant clef for your instrument and then playing progressively more difficult music, starting with scales. [1] There’s some amount of symbol-pushing, but most of the time, there’s insight to be gleaned from it (although, sometimes, you just have to say “this is the correct result because the algebra says so”, but this isn’t overly common).
Proofs themselves are interesting creatures. In most schools, there’s a “transition course” that takes aspiring math majors who have heretofore only done computation and trains them to write proofs; any proofy math book written for any other course just assumes this knowledge but, in my experience (both personally and working with other students), trying to make sense of what’s going on in these books without familiarity with what makes a proof valid or not just doesn’t work; it’s not entirely unlike trying to understand a book on arithmetic that just assumes you understand what the + and * symbols mean. This transition course more or less teaches you to speak and understand a funny language mathematicians use to communicate why mathematical propositions are correct; without taking the time to learn this funny language, you can’t really understand why the proof of a theorem actually does show the theorem is correct, nor will you be able to glean any insight as to why, on an intuitive level, the theorem is true (this is why I doubt you’d have much success trying to read Spivak, absent a transition course). After the transition course, this funny language becomes second nature, it’s clear that the proofs after theorem statements, indeed, prove the theorems they claim to prove, and it’s often possible, with a bit of work [2], to get an intuitive appreciation for why the theorem is true.
To summarize: the math I think you’re looking to learn is proofy, not computational, in nature. This type of math is inherently impossible to learn in a rote manner; instead, you get to spend hours and hours by yourself trying to prove propositions [3] which isn’t dull, but may take some practice to appreciate (as noted below, if you’re at the right level, this activity should be flow-inducing). The first step is to do a transition, which will teach you how to write proofs and discriminate between correct proofs from incorrect; there will probably some set theory.
So, you want to transition; what’s the best way to do it?
Well, super ideally, the best way is to have an experienced teacher explain what’s going on, connecting the intuitive with the rigorous, available to answer questions. For most things mathematical, assuming a good book exists, I think it can be learned entirely from a book, but this is an exception. That said, How to Prove It is highly rated, I had a good experience with it, and other’s I’ve recommended it to have done well. If you do decide to take this approach and have questions, pm me your email address and I’ll do what I can.
This analogy breaks down somewhat when you look at the arc musicians go through. The typical progression for musicians I know is (1) start playing in whatever grade the music program of the school I’m attending starts, (2) focus mainly on ensemble (band, orchestra) playing, (3) after a high (>90%) attrition rate, we’re left with three groups: those who are in it for easy credit (orchestra doesn’t have homework!); those who practice a little, but are too busy or not interested enough to make a consistent effort; and those who are really serious. By the time they reach high school, everyone in this third group has private instructors and, if they’re really serious about getting good, goes back and spends a lot of times practicing scales. Even at the highest level, musicians review scales, often daily, because they’re the most fundamental thing: I once had the opportunity to ask Gloria dePasquale what the best way to improve general ability, and she told me that there’s 12 major scales and 36 minor scales and, IIRC, that she practices all of them every day. Getting back to math, there’s a lot here that’s not analogous to math. Most notably, there’s no analogue to practicing scales, no fundamental-level thing that you can put large amounts of time into practicing and get general returns to mathematical ability: there’s just proofs, and once you can tell a valid proof from an invalid proof, there’s almost no value that comes from studying set theory proofs very closely. There’s certainly an aesthetic sense that can be refined, but studying whatever proofs happen to be at to slightly above your current level is probably the most helpful (like in flow), if it’s too easy, you’re just bored and learn nothing (there’s nothing there to learn), and if it’s too hard, you get frustrated and still learn nothing (since you’re unable to understand what’s going on).)
“With a bit of work”, used in a math text, means that a mathematically literate reader who has understood everything up until the phrase’s invocation should be able to come up with the result themselves, that it will require no real new insight; “with a bit of work, it can be shown that, for every positive integer n, (1 + 1/n)^n < e < (1 + 1/n)^(n+1)”. This does not preclude needing to do several pages of scratch work or spending a few minutes trying various approaches until you figure out one that works; the tendency is for understatement. Related, most math texts will often leave proofs that require no novel insights or weird tricks as exercises for the reader. In Linear Algebra Done Right, for instance, Axler will often state a theorem followed by “as you should verify”, which should require some writing on the reader’s part; he explicitly spells this out in the preface, but this is standard in every math text I’ve read (and I only bother reading the best ones). You cannot read mathematics like a novel; as Axler notes, it can often take over an hour to work through a single page of text.
Most math books present definitions, state theorems, and give proofs. In general, you definitely want to spend a bit of time pondering definitions; notice why they’re correct/how the match your intuition, and seeing why other definitions weren’t used. When you come to a theorem, you should always take a few minutes to try to prove it before reading the book’s proof. If you succeed, you’ll probably learn something about how to write proofs better by comparing what you have to what the book has, and if you fail, you’ll be better acquainted with the problem and thus have more of an idea as to why the book’s doing what it’s doing; it’s just an empirical result (which I read ages ago and cannot find) that you’ll understand a theorem better by trying to prove it yourself, successful or not. It’s also good practice. There’s some room for Anki (I make cards for definitions—word on front, definition on back—and theorems—for which reviews consist of outlining enough of a proof that I’m confident I could write it out fully if I so desired to) but I spend the vast majority of my time trying to prove things.
Your comment made me think, and I’ll look up some of the recommendations. I like the analogy with musicians and also the part where you talked about how the analogy breaks down.
However, I’d like to offer a bit of a different perspective to the original poster on this part of what you said.
Your advice is good, given this assumption. But this assumption may or may not be true. Given that the post says:
I think there’s the possibility that the original poster would be interested in computational mathematics.
Also, it’s not either or. It’s a false dichotomy. Learning both is possible and useful. You likely know this already, and perhaps the original poster does as well, but since the original poster is not familiar with much math, I thought I’d point that out in case it’s something that wasn’t obvious. It’s hard to tell, writing on the computer and imagining a person at the other end.
If the word “computational” is being used to mean following instructions by rote without really understanding why, or doing the same thing over and over with no creativity or insight, then it does not seem to be what the original poster is looking for. However, if it is used to mean creatively understanding real world problems, and formulating them well enough into math that computer algorithms can help give insights about them, then I didn’t see anything in the post that would make me warn them to steer clear of it.
There are whole fields of human endeavor that use math and include the term “computational” and I wouldn’t want the original poster to miss out on them because of not realizing that the word may mean something else in a different context, or to think that it’s something that professional mathematicians or scientists or engineers don’t do much. Some mathematicians do proofs most of the time, but others spend time on computation, or even proofs about computation.
Fields include computational fluid dynamics, computational biology, computational geometry...the list goes on.
Speaking of words meaning different things in different contexts, that’s one thing that tripped me up when I was first learning some engineering and math beyond high school. When I read more advanced books, I knew when I was looking at an unfamiliar word that I had to look it up, but I hadn’t realized that some words that I already was familiar with had been redefined to mean something else, given the context, or that the notation had symbols that meant one thing in one context and another thing in another context. For example, vertical bars on either side of something could mean “the absolute value of” or it could mean “the determinant of this matrix”, and “normal forces” meant “forces perpendicular to the contact surface”. Textbooks are generally terribly written and often leave out a lot.
In other words, the jargon can be sneaky and sound exactly like words that you already know. It’s part of why mathematical books seem so nonsensical to outsiders.
Excellent points; “rigorous” would have been a better choice. I haven’t yet had the time to study any computational fields, but I’m assuming the ones you list aren’t built on the “fuzzy notions, and hand-waving” that Tao talks about.
I should also add I don’t necessarily agree 100% with every in Lockhart’s Lament; I do think, however, that he does an excellent job of identifying problems in how secondary school math is taught and does a better job than I could of contrasting “follow the instructions” math with “real” math to a lay person.
Interesting. One of my recurring themes is that mathematics and statistics are very different things and require different kind of brains/thinking—people good at one will rarely be good at the other, too.
If you define mathematics as being about proofs (and not so much about computation), the distinction becomes more pronounced: statistics isn’t about proofs at all, it’s about dealing with uncertainty. There are certainly areas where they touch (e.g. proving that certain estimators have certain properties), but at their core, mathematics and statistics are not similar at all.
I’m skeptical that there is any such distinction. “Computational” math is near-worthless in the absence of a proof of correctness for what you’re computing. Even statistics relies on such proofs, though sometimes these can only provide approximate results. (For instance, maximum-likelihood methods are precisely optimal under the simplifying assumption of uniform priors and a 0-1 loss function.)
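That maximum-likelihood claim can be made concrete with a toy calculation. A minimal sketch (my own example, not from the thread): under a 0-1 loss, the Bayes-optimal estimate is the posterior mode, and with a uniform prior the posterior is proportional to the likelihood, so the MAP estimate and the MLE coincide. The coin-flip data and the grid search here are purely illustrative.

```python
# Sketch: with a flat prior, posterior mode (MAP, optimal under 0-1 loss)
# coincides with the maximum-likelihood estimate (MLE).
import math

data = [1, 1, 0, 1, 0, 1, 1, 1]            # hypothetical coin flips: 6 heads / 8
grid = [i / 1000 for i in range(1, 1000)]  # candidate values of p in (0, 1)

def log_likelihood(p):
    # Bernoulli log-likelihood of the observed flips given heads-probability p
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

mle = max(grid, key=log_likelihood)

def log_posterior(p):
    # Uniform prior on (0, 1): log-prior is a constant, so it shifts
    # the log-posterior without moving its argmax.
    return log_likelihood(p) + math.log(1.0)

map_est = max(grid, key=log_posterior)

assert mle == map_est   # the two estimates coincide on this grid
print(mle)              # argmax at 6/8 = 0.75
```

The point of the "precisely optimal under simplifying assumptions" caveat is visible here: change the prior to anything non-flat, or the loss to something other than 0-1, and the two estimates separate.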
Statistical tools rely on such proofs.
Statistics is an applied science, similar to engineering. It has to deal with the messy world where you might need to draw conclusions from a small data set of uncertain provenance where some outliers might be data entry mistakes (or maybe not), you are uncertain of the shape of the distributions you are dealing with, have a sneaking suspicion that the underlying process is not stable in time, etc. etc. None of the nice assumptions underlying nice proofs of optimality apply. You still need to analyse this data set.
Except for all that pesky theoretical statistics.
Math people can have that :-) It is, basically, applied math, anyway.
Except it’s not math. Disciplines are socially constructed: statistics is what statisticians do, and applied math is what applied math people do. There are lots of very theoretical stats departments. I think you are having a confusion similar to the one people sometimes have about computer science and programming.
I think if you say stuff like “well, all those people who publish in Annals of Statistics are applied math people” I am not sure what you are really saying. There is some intersection w/ applied math, ML, etc., but theoretical stats has their own set of big ideas that define the field and give it character.
I don’t think I do? I am well aware of the famous Dijkstra’s quote.
As you mentioned, statistics is what statisticians do. Most statisticians don’t work in academia. I don’t doubt there are a lot of theory-heavy stats departments, just like there are a lot of physics-heavy engineering departments.
Going up one meta-level, I’m less interested in where the socially constructed discipline boundaries happen to lie, and more interested in feeling for the joints in the underlying territory.
Not sure why we are having this discussion. Statistics is a discipline with certain themes, like “intelligently using data for conclusions we want.” These themes are sufficient to give it its own character, and make it both an applied and theoretical discipline. I don’t think you are a statistician, right? Why are you talking about this?
Statistics is as much an applied discipline as physics.
Because I’m interested in the subject. Do you have objections?
You can post about whatever you want. I have objections if you start mischaracterizing what statistics is about for fun on the internet. Fun on the internet is great, being snarky on the internet is ok, misleading people is not.
edit: In fact, you can view this whole recent “data science” thing that statisticians are so worried about as a reaction to the statistics discipline becoming too theoretical and divorced from actual data analysis problems. [This is a controversial opinion, I don’t think I share it, quite.]
I don’t believe I’m mischaracterizing statistics. My original point was an observation that, in my experience, good mathematicians and good statisticians are different. Their brains work differently. To use an imperfect analogy, good C programmers and good Lisp programmers are also quite different. You just need to think in a very different manner in Lisp compared to C (and vice versa). That, of course, doesn’t mean that a C programmer can’t be passably good in Lisp.
I understand that in academia, statistics departments usually focus on theoretical statistics. That’s fine—I don’t particularly care about “official” discipline boundaries. For my purposes I would like to draw a divide between theoretical statistics and, let’s call it, practical statistics. I find it useful to classify theoretical statistics as applied math, and practical statistics as something different from that.
Data science is somewhat different from traditional statistics, but I’m not sure its distinction lies on the theoretical-practical divide. As a crude approximation, I’d say that traditional statistics is mostly concerned with extracting precise and “provable” information out of small data sets, and data science tends to drown in data and so loves non-parametric models and ML in particular.
Ok, I am not interested in wasting more time on this, all I am saying is:
This is misleading. Theoretical statistics is not applied math, either. I think you don’t know what you are talking about, re: this subject.
So we disagree :-)
Well, this is a matter of degree. There is a reason we use these tools in the first place. A good statistician must be quite aware of the underlying assumptions of each tool, if only so that they can switch to something else when warranted. (For instance, use “robust” methods which try to identify and appropriately discount outliers.)
Well, of course.
Heh. The word “appropriately” is a tricky one. There is a large variety of robust methods which use different ways of discounting outliers, naturally with different results. The statistician will need to figure out what’s “appropriate” in this particular case and proofs don’t help here.
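A toy illustration of that point (my numbers, purely hypothetical): on the same small data set with one suspicious point, three reasonable summary choices give two different answers, and nothing in the proofs behind each estimator tells you which discounting of the outlier is the “appropriate” one here.

```python
# Same data, three estimators, different treatment of the outlier.
data = [2.1, 1.9, 2.0, 2.2, 1.8, 95.0]  # is 95.0 real, or a data entry mistake?

# Plain mean: every point counts equally, so the outlier dominates.
mean = sum(data) / len(data)

# Median: the outlier only counts as "one large value".
s = sorted(data)
median = (s[len(s) // 2 - 1] + s[len(s) // 2]) / 2  # even-length median

# Trimmed mean: drop the min and max entirely before averaging.
trimmed = s[1:-1]
trimmed_mean = sum(trimmed) / len(trimmed)

print(round(mean, 2), round(median, 2), round(trimmed_mean, 2))
# 17.5 vs 2.05 vs 2.05 — the choice of method, not a theorem, decides the answer
```

The median and the trimmed mean happen to agree on this particular data set, which is itself a judgment-call hazard: a different trimming fraction or a Huber-style downweighting would give yet other values.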