The new heavyweight macro critics


I got tired of lambasting macroeconomics a while ago, and the "macro wars" mostly died down in the blogosphere around when the recovery from the Great Recession kicked in. But recently, there have been a number of respected macroeconomists posting big, comprehensive criticisms of the way academic macro gets done. Some of these criticisms are more forceful than anything we bloggers blogged about back in the day! Anyway, I thought I'd link to a couple here.

First, there's Paul Romer's latest, "The Trouble With Macroeconomics". The title is an analogy to Lee Smolin's book "The Trouble With Physics". Romer basically says that macro (meaning business-cycle theory) has become like the critics' harshest depictions of string theory - a community of believers, dogmatically following the ideas of revered elders and ignoring the data. The elders he singles out are Bob Lucas, Ed Prescott, and Tom Sargent.

Romer says that it's obvious that monetary policy affects the real economy, because of the Volcker recessions in the early 80s, but that macro theorists have largely ignored this fact and continued to make models in which monetary policy is ineffectual. He says that modern DSGE models are no better than old pre-Lucas Critique simultaneous-equation models, because they still take lots of assumptions to identify the models, only now the assumptions are hidden instead of explicit. Romer points to distributional assumptions, calibration, and tight Bayesian priors as ways of hiding assumptions in modern DSGE models. He cites an interesting 2009 paper by Canova and Sala that tries to take DSGE model estimation seriously and finds (unsurprisingly) that identification is pretty difficult.
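To see what "weak identification" means in practice, here's a minimal sketch - a toy of my own, not an example from Romer or from Canova and Sala - in which two structural parameters only reach the data through their product. The AR(1) setup, the parameter names alpha and beta, and all the numbers are illustrative assumptions. The likelihood has a long flat ridge: the data pin down the product but say almost nothing about the parameters individually, so a tight prior on one of them quietly does the identifying work.

```python
# Toy illustration (my own, not from the papers discussed): two "structural"
# parameters, alpha and beta, enter the observable dynamics only through their
# product, so the data cannot pin them down separately. A tight Bayesian prior
# on beta would then silently supply the identification.
import numpy as np

rng = np.random.default_rng(0)

# "True" structural parameters; only alpha*beta is identified from the data.
alpha_true, beta_true = 0.6, 0.9
T = 200

# Simulate the reduced form: y_t = (alpha*beta) * y_{t-1} + eps_t
y = np.zeros(T)
for t in range(1, T):
    y[t] = alpha_true * beta_true * y[t - 1] + rng.normal()

def neg_log_like(alpha, beta, y):
    """Gaussian negative log-likelihood (up to constants, unit shock variance)."""
    resid = y[1:] - alpha * beta * y[:-1]
    return 0.5 * np.sum(resid ** 2)

# Evaluate the fit over a grid of (alpha, beta) pairs.
grid = np.linspace(0.1, 0.99, 50)
fits = [(a, b, neg_log_like(a, b, y)) for a in grid for b in grid]
best = min(f[2] for f in fits)

# Every pair whose product is near the best-fitting value fits essentially
# as well as the optimum: the data alone cannot distinguish among them.
ridge = [(a, b) for a, b, f in fits if f < best + 1.0]
alphas = [a for a, b in ridge]
products = [a * b for a, b in ridge]
print(f"{len(ridge)} (alpha, beta) pairs fit within 1 log-likelihood unit of the best")
print(f"alpha alone ranges over [{min(alphas):.2f}, {max(alphas):.2f}] along that ridge,")
print(f"while alpha*beta stays within [{min(products):.2f}, {max(products):.2f}] "
      f"(true product: {alpha_true * beta_true:.2f})")
```

Real DSGE models are vastly more complicated than this, but the Canova and Sala point is that the mapping from structural parameters to the model's solution can behave in much the same way, which is how calibration and tight priors end up smuggling in the identification.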

As a solution, Romer suggests chucking formal modeling entirely and going with more general, vague but flexible ideas about policy and the macroeconomy, supported by simple natural experiments and economic history. 

Romer's harshest zinger (and we all love harsh zingers) is this:
In response to the observation that the shocks [in DSGE models] are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that "the more significant the theory, the more unrealistic the assumptions (p.14)." More recently, "all models are false" seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.  
The noncommittal relationship with the truth revealed by these methodological evasions...goes so far beyond post-modern irony that it deserves its own label. I suggest "post-real."
Ouch. He also calls various typical DSGE model elements names like "phlogiston", "aether", and "caloric". Fun stuff. (Though I do think he's too harsh on string theory, which often is just a kind of math that physicists do to keep themselves busy, and has no danger of hurting anyone, unlike macro theory.)

Meanwhile, a few weeks earlier, Narayana Kocherlakota wrote a post called "On the Puzzling Prevalence of Puzzles". The basic point was that since macro data is fairly sparse, macroeconomists should have lots of competing models that all do an equally good job of matching the data. But instead, macroeconomists pick a single model they like, and if data fails to fit the model they call it a "puzzle". He writes:
To an outsider or newcomer, macroeconomics would seem like a field that is haunted by its lack of data...In the absence of that data, it would seem like we would be hard put to distinguish among a host of theories...[I]t would seem like macroeconomists should be plagued by underidentification... 
But, in fact, expert macroeconomists know that the field is actually plagued by failures to fit the data – that is, by overidentification. 
Why is the novice so wrong? The answer is the role of a priori restrictions in macroeconomic theory... 
The mistake that the novice made is to think that the macroeconomist would rely on data alone to build up his/her theory or model.  The expert knows how to build up theory from a priori restrictions that are accepted by a large number of scholars...[I]t’s a little disturbing how little empirical work underlies some of those agreed-upon theory-driven restrictions – see p. 711 of Lucas (JMCB, 1980) for a highly influential example of what I mean.
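Here's a minimal sketch of the "puzzle" mechanism Kocherlakota is describing - again a toy of my own construction with made-up numbers, not anything from his post. The data come from a persistent process, the researcher fixes the persistence parameter at an agreed-upon a priori value instead of estimating it, and the leftover gap gets a name instead of a fix.

```python
# Toy illustration (my own construction) of how an a priori restriction turns a
# fixable misfit into a "puzzle". The data come from a persistent AR(1); the
# researcher calibrates persistence to a consensus value, and the remaining gap
# between model and data becomes the "persistence puzzle".
import numpy as np

rng = np.random.default_rng(1)

rho_true = 0.95          # persistence in the data-generating process
rho_calibrated = 0.50    # hypothetical "agreed-upon" a priori value
T = 200

y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.normal()

# First-order autocorrelation in the data.
sample_ac = np.corrcoef(y[1:], y[:-1])[0, 1]

# Unrestricted estimate: just let the data speak.
rho_ols = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

print(f"sample autocorrelation:     {sample_ac:.2f}")
print(f"model with calibrated rho:  {rho_calibrated:.2f}  <- the 'puzzle'")
print(f"model with estimated rho:   {rho_ols:.2f}  <- no puzzle at all")
```

Nothing in the data forces the calibrated value; relax the a priori restriction and the "puzzle" goes away. That is the sense in which the field ends up "plagued by failures to fit the data."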
In fact, Kocherlakota and Romer are complaining about much the same thing: the overuse of unrealistic assumptions. Basically, they say that macroeconomists, as a group, have gotten into the habit of assuming stuff that just isn't true. In fact, this is what the Canova and Sala paper says too, in a much more technical and polite way:
Observational equivalence, partial and weak identification problems are widespread and typically produced by an ill-behaved mapping between the structural parameters and the coefficients of the solution.
That just means that the model elements aren't actually real things.

(This critique resonates with me. From day 1, the thing that always annoyed me about macro was how people made excuses for assumptions that were either unverifiable or flatly contradicted by micro data. The usual excuse was the "pool player analogy" - the idea that the pieces of a model don't have to match micro data as long as the resulting model matches macro data. I'm not sure that's how Milton Friedman wanted his metaphor to be used, but that seems to be the way it does get used. And when the models didn't match macro data either, the excuse was "all models are wrong," which really just seems to be a way of saying "the modeler gets to choose which macro facts are used to validate his theory". It seemed that to a large extent, macro modelers were just allowed to do whatever they wanted, as long as their papers won some kind of behind-the-scenes popularity contest. But I digress.)

So what seems to unite the new heavyweight macro critics is an emphasis on realism. Basically, these people are challenging the idea, very common in econ theory, that models shouldn't worry about being realistic. (Paul Pfleiderer is another economist who has recently made a similar complaint, though not in the context of macro.) They're not saying that economists need 100% perfect realism - that's the kind of thing you only get in physics, if anywhere. As Paul Krugman and Dani Rodrik have emphasized, even the people advocating for more realism acknowledge that there's some ideal middle ground. But if Romer, Kocherlakota, etc. are to be believed, macroeconomists aren't currently close to that optimal interior solution.


Updates

Olivier Blanchard is a bit less forceful, but he's definitely also one of the new heavyweight critics. Among his problems with DSGE models, at least as they're currently done, are 1. "unappealing" assumptions that are "at odds with what we know about consumers and firms", and 2. "unconvincing" estimation methods, including calibration and tight Bayesian priors. Sounds pretty similar to Romer.

Meanwhile, Kocherlakota responds to Romer. He agrees with Romer's criticism of unrealistic macro assumptions, but he dismisses the idea that Lucas, Prescott, and Sargent are personally responsible for the problems. Instead, he says it's about the incentives in the research community. He writes:
We [macroeconomists] tend to view research as being the process of posing a question and delivering a pretty precise answer to that question...The research agenda that I believe we need is very different. It’s hugely messy work.  We need...to build a more evidence-based modeling of financial institutions.  We need...to learn more about how people actually form expectations.  We need [to use] firm-based information about residual demand functions to learn more about product market structure.  At the same time, we need to be a lot more flexible in our thinking about models and theory, so that they can be firmly grounded in this improved empirical understanding.
Kocherlakota says that this isn't a "sociological" issue, but I think most people would call it that. Since journals and top researchers get to decide what constitutes "good" research, it seems to me that to get the changes in focus Kocherlakota wants, a sociological change is exactly what would be required.

Kocherlakota now has another post describing how he thinks macro ought to be done. Basically, he thinks researchers - as a whole, not just on their own! - should start with toy models to facilitate thinking, then gather data based on what the toy models say is important, then build formal "serious" models from the ground up to match that data. He contrasts this with the current approach of tweaking existing models.

My question is: Who is going to enforce this change? If a few established researchers start doing things the way Kocherlakota wants, they'll certainly still get published (because they're famous old people), but will the young folks follow? How likely is it that established researchers en masse are going to switch to doing things this way, and demanding that young researchers do the same, and using their leverage as reviewers, editors, and PhD advisers to make that happen? This doesn't seem like the kind of change that can be brought about by a few young smart rebels forcing everyone else to recognize the value of their approach - the existing approach, which Kocherlakota dislikes, already succeeds in getting publication and prestige, so the rebels would simply coexist alongside the old approach, rather than overthrowing it. How could this cultural change be put into effect?

Also: Romer now has a follow-up to his original post, defending his original post against the critics. This part stood out to me as particularly persuasive:
The whine I hear regularly from the post-real crowd is that “it is really, really hard to do research on macro so you shouldn’t criticize any of our models unless you can produce one that is better.” 
This is just post-real Calvinball used as a shield from criticism. Imagine someone saying to a mathematician who finds an error in a theorem that is false,  “you can’t criticize the proof until you come up with valid proof.” Or try this one on and see how it feels: “You can’t criticize the claim that vaccines cause autism unless you can come up with a better explanation for autism.”
Sounds right to me. The old line that "it takes a theory to kill a theory" just seems wrong. Sometimes all it takes is evidence.
