Macroeconomics, The Lucas Critique, Microfoundations, and Modeling in General
I posted this comment in reply to a post by David Henderson over at EconLog, but first some context.
Matthew Yglesias writes:
...From an outside perspective, what seems to be going on is that economists have unearthed an extremely fruitful paradigm for investigation of micro issues. This has been good for them, and enhanced the prestige of the discipline. No such fruitful paradigm has actually emerged for investigation of macro issues. So the decision has been made to somewhat arbitrarily impose the view that macro models must be grounded in micro foundations. Thus, the productive progressive research program of microeconomics can “infect” the more troubled field of macro with its prestige...
...But as a methodological matter, it seems deeply unsound. As a general principle for investigating the world, we normally deem it desirable, but not at all necessary, that researchers exploring a particular field of inquiry find ways to “reduce” what they’re doing to a lower level....
...Trying to enhance models with better information about psychology isn’t against the rules, but it’s not required either. What’s required is that the models do useful work.
So why should it be that “in the current regime, if [macro models] are not meticulously constructed from “micro foundations,” they aren’t allowed to be considered”?
To which a commenter replies:
While I’m the first to acknowledge that the current macro research paradigm has given us little useful analysis, there is a standard answer to Matt’s question.
You can start by going to Wikipedia and reading about the Lucas Critique...
I won’t reproduce the whole thing, click through to the comment to see a decent summary of the Lucas Critique if you aren’t aware of it already.
Henderson, over at EconLog, replies:
...Second, the demand for microfoundations, or at least the supply of them, goes back more than 10 years before the date Arnold claims. It goes back at least to Milton Friedman’s A Theory of the Consumption Function...,
...Third, interestingly, Milton Friedman himself would probably agree with Yglesias about the idea that there’s not necessarily a need for micro foundations for macro. In a 1996 interview published in Brian Snowdon and Howard R. Vane, Modern Macroeconomics, Edward Elgar, 2005, Friedman said:
It is less important for macroeconomic models to have choice-theoretic microfoundations than it is for them to have empirical implications that can be subjected to refutation.

In saying this, Friedman was going back to his positivist roots, which he laid out at length in his classic 1953 essay, “The Methodology of Positive Economics,” published in Essays in Positive Economics. There was always an interesting tension in Friedman’s work, which he never resolved, between reasoning as a clear-headed economist about people acting based on incentives and constraints and “positivistly” black-boxing it and trying to come up with predictions.
And without further ado, here’s my response:
I don’t think there is a tension in Friedman’s thinking at all. We want our models to predict; otherwise, what are they good for? If a model doesn’t have microfoundations and predicts well, so what? Microeconomic models, as Yglesias notes, don’t have “microfoundations” in psychology. Yet they still do a pretty good job under a variety of circumstances.
The reason microfoundations are necessary is instrumental to predictive power. It turns out that models with microfoundations [tend to] predict better than models without them, in all fields. This is why chemists tend to build their models up from physics and why biologists do the same with chemistry. It also turns out that microeconomics can be quite successful without microfoundations, while macroeconomics is far less successful.
The Lucas critique tells us many of the reasons why macroeconomics needs microfoundations. The main reason microeconomics does not is that it is built from pretty solid intuitions about how individuals act. They aren’t perfect, of course, but as a first approximation, (instrumental) rationality does a pretty good job of describing human behavior in many cases. And the only way we really know this is by going out and testing our models, which, behavioral economics notwithstanding, have done well.
There’s a trade-off between accuracy and analytical tractability in modeling. More microfoundations will tend to increase accuracy, but imagine if we started with physics for every single scientific problem. The computations would be insane, so instead we simplify things and ignore some of the microfoundations. It is called a model for a reason, after all.
Friedman is right: if microfoundations do end up being important, models without them will do poorly relative to models that use them (and thus the relevant trade-off will manifest itself). Note that this is precisely what happened in the history of macro, and kudos to Friedman for realizing that microfoundations were important before the rest of the field. I suspect that macro models [don’t] do well in an absolute sense though, but that is another matter entirely.
ack… I should edit my comments better before posting them (notice the use of square brackets).
edit: some minor formatting
Chemistry via physics doesn’t really work without quantum mechanics. This is why chemistry didn’t exist until the last 150 years or so; everything before that was just alchemy. Am I getting this right?
And of course, the field has also been slowed down by the nature of calculating the wave function, which is intractable for anything but very simple systems. That’s why biology couldn’t exist until the invention of supercomputers in the 1970s enabled researchers to approximate the wave functions of organic molecules.
There seems to be a confusion between, and I’ll borrow LW terminology here, epistemic reductionism and instrumental reductionism. If you reject the former, Daniel Dennett will jump out of the supply closet and kick you in the face. Epistemic reductionism is physics students deriving classical equations from quantum mechanics as an exercise.
Instrumental reductionism, on the other hand, is only one tool for actually getting things done and, in practice, many situations involving large, complex systems are better tackled by simpler models that selectively ignore some or all of the “microfoundations” in favor of observing high-level patterns. It is nice, but not required, to be able to prove the accuracy of the high-level rules in terms of low-level laws.
Consider, for instance, Conway’s Game of Life. If you have an otherwise empty field containing just a glider gun, do you need to model the state of the field by iterating the entire thing? No, just look at the period of the gun and the speed of the gliders and you can predict the state with much simpler calculations.
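To make that concrete, here’s a minimal sketch in Python. It assumes a Gosper-style gun with period 30 emitting gliders that travel one diagonal cell every 4 generations (speed c/4), and it glosses over the emission phase and exact coordinates; the point is just that the prediction becomes arithmetic rather than simulation.

```python
GUN_PERIOD = 30   # generations between emitted gliders (Gosper-style gun)
GLIDER_SPEED = 4  # generations per diagonal cell of travel (speed c/4)

def glider_offsets(t):
    """Predict how far each glider emitted by generation t has travelled
    (in diagonal cells from the gun), without iterating the field."""
    offsets = []
    k = 0
    while k * GUN_PERIOD <= t:       # glider k is emitted at generation k * GUN_PERIOD
        age = t - k * GUN_PERIOD
        offsets.append(age // GLIDER_SPEED)
        k += 1
    return offsets

print(glider_offsets(300))  # 11 gliders located in O(t / period) steps, not O(t * field size)
```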
TL;DR version: Don’t care about microfoundations. Care about tractability and accuracy. Just because a system can be reduced does not mean a reductionist analysis is useful.
I don’t think you are disagreeing with me at all. You pretty much sum my point up with this:
This sums it up even better:
The only major thing I can think of that you might disagree with is that microfoundations tend to increase accuracy.
FWIW I wasn’t talking about epistemic reductionism at all.
I don’t think so, either, except possibly to quibble about an analogy. Sorry if I wasn’t clear on that. I was more attempting to discuss what seems to be the confusion you were responding to and make a more general point, that worrying about high-level models reducing to low-level models is potentially misguided if it’s just reductionism for its own sake.
You realize, of course, that no one’s going to get to the TL;DR version unless they didn’t think it was too long and already read it? ;)
I sometimes skim lengthy posts. If something, particularly near the end, catches my eye I go back and reread the whole thing properly. Your mileage may vary.
Fair enough. I guess I was wondering whether it would work better to put it at the top. For some reason I got distracted and forgot to actually mention that in my previous comment. Sorry.
Reductionism isn’t necessary for everyday use. But it is preferred, as it will tell you when things might change. Trends/formulae derived from simpler, well-tested concepts are more stable.
Take, for example, the long-term trend of world economic growth. It is a good day-to-day expectation that the world economy is growing; however, if we could reduce it to a function of population growth, energy supply growth, and new technology, we would have a better idea of when we might stop getting the fairly reliable historic growth.
Compare and contrast:
von Mises’ intuitions
Kahneman and Tversky’s evidence
Which is a better foundation for microeconomics?
If you replace von Mises’ intuitions with the particular intuitions neoclassical economics is built from (to the extent that they differ), then it depends on the particular question you are trying to answer. Market activity is approximated reasonably well by the rationality assumption in a variety of cases. Kahneman and Tversky’s evidence that humans are irrational is certainly strong, but in many cases trying to incorporate it reduces tractability to such an extent that it isn’t worth it, or at least we don’t know how to incorporate it. A good heuristic is to use rationality for long-run phenomena and, when possible, incorporate irrationality in the short run.
Even more so, Gary Becker proved in 1962 that you don’t need rationality for many of the basic principles of microeconomics to hold. All you need is for each person to have a maximum budget—a noncontroversial assumption if there ever was one.
Many different kinds of non-utility-maximizing behavior, and maximizing behavior across nonstandard preferences (sticky actions, bounded rationality, etc.), still produce the key results.
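A quick Monte Carlo sketch of Becker’s point, with illustrative numbers: agents who pick a completely random bundle on their budget line (no maximization at all) still generate downward-sloping average demand, simply because a higher price shrinks the feasible set.

```python
import random

def avg_demand(price, budget=100.0, n_agents=100_000):
    """Average quantity of good 1 bought when each agent spends a
    uniformly random share of the budget on it (Becker-style agents)."""
    total = 0.0
    for _ in range(n_agents):
        share = random.random()          # random point on the budget line
        total += share * budget / price  # quantity of good 1 purchased
    return total / n_agents

for p in (1.0, 2.0, 4.0):
    print(p, round(avg_demand(p), 2))    # ~50, ~25, ~12.5: demand falls as price rises
```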
This is true, but it’s also worth emphasising that in many cases, we do have reasonably tractable micro models that incorporate irrationality [ETA: I should instead have said nonstandard preferences; not all of these are necessarily irrational], and they do get used. (I’m not suggesting you disagree with this, I just don’t want to give casual non-economist readers the impression that the discipline as a whole blithely ignores such things.)
How exactly do you use irrationality?
You don’t; you use a decision model that incorporates bias.
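For one concrete example, here’s a minimal sketch of the Tversky-Kahneman (1992) prospect-theory value function, which builds loss aversion and diminishing sensitivity into the decision model; the parameter values are their commonly cited estimates.

```python
ALPHA = 0.88   # curvature for gains (diminishing sensitivity)
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(value(100), value(-100))  # ~57.5 vs ~-129.4: the loss hurts far more
```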