Just wanted to point out that many contributors to the site are afflicted by what I call “theoritis”, a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field. The field in question can be psychology, neuroscience, physics, math, computer science, you name it.
It is rare that people consider the reverse situation first: what would I think of an amateur who argues with me in my area of competence? For example, if you are an auto mechanic, would you take seriously someone who tells you how to diagnose and fix car issues without ever having done any repairs themselves? If not, why would you argue with a physicist about quantum mechanics, with a decision theorist about utility functions, or with a mathematician about first-order logic, unless that’s your area of expertise? Of course, looking back at what I post about, I am no exception.
OK, I cannot bring myself to add philosophy to the list of “don’t argue with the experts, learn from them” topics, but maybe it’s because I don’t know anything about philosophy.
I take non-programmers seriously about programming all of the time. That’s pretty much in the job description.
Just because I’m not stupid doesn’t mean I’m not wrong. Indeed, it takes some serious intelligence to be wrong in the worst kind of ways.
About implementation, or about what to implement?
In practice the two are, in my line of work, very difficult to separate. The what is almost always the how. But I take them seriously about both, out of practical necessity. When the client insists on a particular implementation, that’s the implementation you go with.
I would assume that’s high-level—“use Oracle, not MySQL”
That’s part of it, but no, that’s not what I’m referring to. Client necessities are client necessities.
“Encryption and file delivery need to be in separate process flows” would be closer. (This sounds high-level, but in the scripting language I do most of my work in, both of these are atomic operations.)
A relevant distinction that you are not making is between the questions that are well-understood in the expert’s area and the questions that are merely associated with the expert’s area (or are the expert’s own inventions), where we have no particular reason to expect that the expert’s position on the topic is determined by its truth and not by some accident of epistemic misfortune. The expert will probably know the content of their position very well, but won’t necessarily correctly understand the motivation for that position. (On the other hand, someone sufficiently unfamiliar with the area might be unable to say anything meaningful about the question.)
Good point. Also, even when questions are well-understood by domain experts it still can be very effective to argue about them, since this usually leads to the clearest arguments and explanations. This is especially true since the social norms on this site highly value truth-seeking, epistemic hygiene (including basic intellectual honesty) and scholarship: in many other venues (including some blogs), anti-expertise attitudes do lead to bad outcomes, but this does not seem to apply much on LW.
Good post. It’s EY’s fault, imo. He set the norms.
Not exactly a green amateur, so how could he have set that norm? EDIT: Retracted, you answered in another comment.
I think philosophy does belong on the list if you are arguing some matters of philosophy, but not others. There is a field common to all mathematics-heavy disciplines, namely mathematics, with huge overlaps, and there’s no reason why, for example, a physicist couldn’t correctly critique the bad mathematics of a philosopher, even though most non-philosophers and amateur philosophers really should learn rather than argue, since a philosopher is a bit of an expert in mathematics.
I find that an odd statement. Why can’t you assume by default that arguing with an expert in X is bad for all X?
For some reason, theoritis is much worse with regard to philosophy than just about anything else. Amateurs hardly ever argue with brain surgeons or particle physicists. I think part of the reason for that is that brain surgeons and particle physicists have manifest practical skills that others don’t have. The “skill” of philosophy consists of stating opinions and defending them, which everyone can do to some extent. The amateurs are like people who think you can write (well, at a professional level) because you can type.
By default, yes. Let me try to articulate my perception of the difference between philosophers and other experts. When I talk to a mathematician, or a physicist, or a computer scientist, I can almost immediately see that their level in their discipline is way above mine, because they bring up a standard argument/calculation/proof which refutes my home-made ideas, then extend those ideas in a direction I never considered and show which of them are any good. Talking to an expert willing to take you seriously is generally a humbling experience. You see the depth of their knowledge and realize that arguing with them instead of listening is a poor strategy. By the way, I noticed that I sometimes also do that to people when I talk about my area of expertise.
Now, when I listen to a mainstream philosophical argument, I don’t feel humbled at all (with one or two exceptions), instead I want to scream “why are you arguing about definitions? Especially the definitions you didn’t even bother formalizing?!?!” or “why do you rely on a premise you find “intuitive” or “obvious”, given that it’s rather not obvious to others?” or “why do you gleefully strawman someone else’s argument instead of trying to salvage it?”. The exceptions are generally in areas which can hardly be considered philosophy; they are usually part of mathematical logic, or computer science, or physics, or psychology, which makes them (gasp!) testable, something classical philosophers seem to shy away from. I don’t normally get the feeling of awe and respect when listening to a philosopher. They can certainly cite a multitude of sources and positions and reproduce some ancient arguments, but many of these arguments look as outdated as Aristotle’s ideas about physics, and so are only of historical interest.
Again, I’m no expert in the matters of philosophy, so my perspective might be completely wrong, but that’s the explanation why I did not add philosophers to the list of experts in my original comment.
Now, when I listen to a mainstream philosophical argument, I don’t feel humbled at all (with one or two exceptions), instead I want to scream “why are you arguing about definitions?

Because philosophers deal with abstract concepts, not things you can point at, and because many philosophical problems are caused by inconsistent definitions, as in the when-a-tree-falls problem.

Especially the definitions you didn’t even bother formalizing?!?!”

Philosophers can and do stipulate.

or “why do you rely on a premise you find “intuitive” or “obvious”, given that it’s rather not obvious to others?”

Are there fields where people don’t rely on intuitions?

or “why do you gleefully strawman someone else’s argument instead of trying to salvage it?”.

Maybe they can’t see how.

Want to give some examples? I don’t seem to recall seeing a lot of this myself.
Come on, Luke has a series of posts taking a shit on the entire discipline of philosophy. Luke is not an expert on philosophy. EY says he isn’t happy with do(.) based causality while getting basic terminology in the field wrong, etc. EY is not an expert on causal inference. If you disagree with Larry Wasserman on a subject in stats, chances are it is you who is confused. etc. etc. Communication and scholarship norms here are just awful.
If you want to see how academic disagreements ought to play out, stroll on over to Scott’s blog.
edit: To respond to the grandparent: I think the answer is adopting mainstream academic norms.
shminux explicitly excluded philosophy, and I wasn’t aware of the other two examples you gave. Can you link to them so I can take a look? (ETA: Never mind, I think I found them. ETA2: Actually I’m not sure. Re Wasserman, are you referring to this?)
I couldn’t agree more. Mainstream academia is a set of rationality skills, and a very case-hardened one. Adding something extra, like cognitive science, might be good, but LW omits a lot of the academic virtues: not blowing off about things you don’t know, making an attempt to answer objections, modesty, etc.
PS: Tenure is a great rationality-promoting institution because...left as an exercise to the reader.
EY says he isn’t happy with do(.) based causality while getting basic terminology in the field wrong

Just for clarity, could you link to where EY does this? Also, it’s fairly well known in statistics that econometricians are unhappy with causal networks and do(.), because causal networks cannot directly account for feedback-like or cyclic phenomena, which are quite ubiquitous in econometric data (think supply and demand factors co-determining price and quantity, or the influence of expectations): causal networks have to be acyclic. So there is a genuine controversy here which is reflected in the literature.
This is precisely what I mean. Well known by whom? Not by me!
Causal networks can easily encode cycles (in fact in two separate ways—via unrolling the cycle a la dynamic Bayesian network, or via non-recursive, or cyclic, structural equation models). Pearl’s first picture of an SEM, Figure 1.5 in his book, shows a cyclic causal diagram representing supply and demand. See google preview here: http://bayes.cs.ucla.edu/BOOK-2K/
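(A minimal sketch in Python of the unrolling idea, with invented coefficients rather than Pearl’s actual numbers: once each variable is indexed by time, every edge points from step t-1 to step t and the graph is acyclic, in the style of a dynamic Bayesian network.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Cyclic model: price and quantity co-determine each other.
# Unrolled version: index each variable by time step, so every
# edge points from t-1 to t and the resulting graph is acyclic.
T = 1000
price = np.zeros(T)
quantity = np.zeros(T)
for t in range(1, T):
    quantity[t] = 10.0 - 0.5 * price[t - 1] + rng.normal(scale=0.1)
    price[t] = 2.0 + 0.3 * quantity[t - 1] + rng.normal(scale=0.1)

# With stable dynamics (|0.5 * 0.3| < 1), the unrolled chain settles
# near the fixed point of the original cyclic equations.
print(price[-1], quantity[-1])  # roughly 4.35 and 7.83
```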
Here’s a paper from as early as 1995 by Spirtes (there have been many more since then) talking about cyclic causal models:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1489
Here’s a logical axiomatization of counterfactual causality in cyclic models (2000):
http://www.jair.org/papers/paper648.html
When you say causal networks cannot account for feedback or cyclic phenomena, what exactly do you mean? Do you have any references for econometricians abandoning do(.) in favor of something else? Or any reference for the controversy? Note that SEMs (which are likely what most econometricians use, due to their preference for instrumental variable methods) are a special case of do(.) models.
As for EY, he was confused about the difference between a causal model and a Bayesian network. This would be sort of comparable to going up to Scott and saying “it seems incontrovertible to me that MWI is the correct interpretation of quantum mechanics. By the way, I got the definition of the Hamiltonian wrong.” One may be right, but the worry is being right for the wrong reasons.
OK, I managed to find the comment by Eliezer that you’re probably referring to, here. But what Eliezer says in that comment is that do(.)-based causality cannot be physically fundamental, which sounds right to me. And Pearl agrees with this, insofar as he states (in Causality) that the correspondence between physical causation (Pearl references the requirement that causes be in the past light cone of their effects; albeit presumably we should also include the principle of locality/”no action at a distance”) and statistical causality analysis is a bit of a mystery, and may say more about the way that people build models of the world and talk about them than anything more fundamental.
As for the confusion between Bayesian networks and causal graphs, Pearl deals with that in his book. Even before causal graphs were formally described, a lot of the interest in Bayesian networks (which are represented as directed graphs) was due to folks wanting to do causal analysis on them, if only informally. And indeed, if all we’re interested in is the correlation structure, then we’re not limited to Bayesian networks: we can use other kinds of graphical models, some of which have better properties (such as Markov graphs).
I am suspending judgement about the feedbacks issue for now, even though I still think it’s important. The point is that you’d need to make the case that causal diagrams can account in a reasonably straightforward way for all relevant uses of SEM (including not just explicit feedback but also equilibrium relationships more generally). Unless this is clearly shown, I don’t think it’s right to call do(.)-based methods a generalization of SEM.
Structural equation models (SEMs) are a special (linear/Gaussian) case of the non-parametric structural model (which uses do(.), or potential outcomes). This is not even an argument we can have, it’s standard math in the field. I don’t know where you learned that this is not the case, but whatever that source, it is wrong.
It’s fairly easy to verify: all non-parametric structural models do is replace the linear mechanism function by an arbitrary function, and the Gaussian noise term by an arbitrary noise term. It’s fairly easy to derive that causal regression coefficients in a SEM are simply interventional expected value contrasts on the difference scale.
So if we have:
y = ax + epsilon, then
a = E[y | do(x = 1)] - E[y | do(x = 0)]
One can also think of regression coefficients as partial derivatives of the interventional mean with respect to the intervened variable:
a = dE[y|do(x)]/dx
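(To make the identity concrete, a hedged simulation sketch in Python; the coefficient value is invented. Drawing y under do(x = 1) and do(x = 0) and differencing the means recovers the structural coefficient a.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = 1.7  # hypothetical structural coefficient

def sample_y(do_x):
    # Structural equation y = a * x + epsilon, with x held at do_x
    # by intervention; epsilon is unaffected by the intervention.
    epsilon = rng.normal(size=n)
    return a * do_x + epsilon

# a = E[y | do(x = 1)] - E[y | do(x = 0)]
print(sample_y(1.0).mean() - sample_y(0.0).mean())  # ~ 1.7
```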
Cyclic causal models do not require either linearity or Gaussianity, although these assumptions make certain things easier.
Part of the reason I post here is I love talking about this stuff, and while I think I can learn much from the lesswrong community, I can also contribute my expertise where appropriate. What is disheartening is arguing with non-experts about settled issues. This reminds me of an episode where Judea asked me to change something on the Wikipedia Bayesian network article, and I got into an edit war with a resident Wikipedia edit camper. I am sure he was not an expert, because he kept reverting to a wrong statement (and had more time than me...). I adjusted my overall opinion of Wikipedia quality based on that :(.
Arguing with experts on a settled issue is a symptom of sloppiness which would be particularly prominent in non-settled issues, though.
You would think so, but I don’t think that’s true. Think about the legions of cranks trying to create perpetual motion machines, or settle the P/NP question, etc. etc. Thermodynamics is fairly settled, the difficulty of the P/NP question is fairly settled. Crankery is an easy attractor, apparently.
Note: I am not calling anyone in this thread a crank, merely responding to the general point that argument is evidence of an unsettled area. It’s true, but the evidence is surprisingly weak.
No, I meant that if someone gets settled stuff wrong, that’s usually due to sloppiness, and said sloppiness is an utter horror in any less settled area. It’s like repeatedly falling head first off a bicycle with the training wheels on. Without training wheels it’s only worse.
I agree that this is true of structural equation models, taken in a fairly narrow sense. However, econometricians commonly generalize these to simultaneous equation models, which include equations that simply assert an algebraic relationship among variables, with no one variable having the privileged status of being “determined”, or an “outcome” of the others. This means that do(.) cannot carry over to such models in a straightforward way. And yes, this is standard practice in econometrics when modeling equilibrium, feasibility constraints and the like.
This is probably a good read also:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.1408
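(A minimal sketch in Python of the kind of simultaneous equation model meant above, with invented supply and demand schedules: the market-clearing condition is just an algebraic constraint that determines price and quantity jointly, with neither designated as the “outcome” of the other.)

```python
import numpy as np

# Hypothetical linear schedules (coefficients invented for illustration):
#   supply:  q = -2 + 1.0 * p
#   demand:  q = 10 - 0.5 * p
# The equilibrium constraint "quantity supplied = quantity demanded"
# determines p and q jointly; neither is an "outcome" of the other.
A = np.array([[1.0, -1.0],   # q - 1.0 * p = -2   (supply)
              [1.0,  0.5]])  # q + 0.5 * p = 10   (demand)
b = np.array([-2.0, 10.0])
q, p = np.linalg.solve(A, b)
print(p, q)  # equilibrium price 8.0, quantity 6.0
```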
To the extent that constraints are simply constraints and not a result of causal structure, the model representing them is partly non-causal (so do(.) or some other representation of causation is irrelevant for such constraints). To the extent that constraints represent some consequence of graphical causal structure I am not aware of a single example where a potential outcome model is not appropriate. Do you have an example in mind?
In some sense, if you have constraints that represent a consequence of causality, such as feedback, and there is no story relating them to interventions/generative mechanisms, then I am not sure in what sense the model is causal. I am not saying it is not possible, but the burden of proof is on whoever proposed the model to clearly explain how causality works in it. There is a lot of confusion in economics, and sometimes even in stats, about causality (Judea is fairly unhappy with the incoherence that many economics textbooks display when discussing causation, actually).
Could this be because we have fewer philosophy experts (although there are a few notable ones) than science experts?