Blog post

The critique of modern macro

Publishing date
20 April 2015

What’s at stake: This week’s conversation on the blogosphere focused on whether the modern macroeconomic tools developed in the stable macroeconomic environment of the Great Moderation actually failed us when we entered the Great Recession.

The state of macro redux

Olivier Blanchard writes that the techniques we use affect our thinking in deep and not always conscious ways. This was very much the case in macroeconomics in the decades preceding the crisis. The techniques were best suited to a worldview in which economic fluctuations occurred but were regular, and essentially self-correcting. The problem is that we came to believe that this was indeed the way the world worked.

Olivier Blanchard writes that we thought of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time. The notion that small shocks could have large adverse effects, or could result in long and persistent slumps, was not perceived as a major issue.

The modern models have at least two questionable features

Wolfgang Munchau writes that the modern models have at least two questionable features. The first is the assumption of a single macroeconomic equilibrium: the notion that the economy reverts to its previous position or path after a shock. The second is linearity: the idea of a straight-line relationship between events. But to understand why the economy did well before 2007, why there was a break in 2008 and why the path of economic output never returned to its previous trajectory, one would need models that incorporate non-linearity, and even chaos.
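The distinction Munchau draws can be made concrete with a toy simulation. The sketch below is my own illustration with made-up numbers, not anything from the post: it contrasts a linear model, which always returns to its single steady state, with a simple nonlinear map that has two stable equilibria, so the size of the shock determines where the economy ends up.

```python
# Toy illustration (all numbers are made up): linear dynamics have one
# steady state and always revert to it; a cubic nonlinear map has two
# stable steady states separated by an unstable threshold.

def linear_step(y, rho=0.8, y_star=100.0):
    """Linear dynamics: output reverts to the unique steady state y_star."""
    return y_star + rho * (y - y_star)

def nonlinear_step(y, lo=80.0, mid=90.0, hi=100.0, k=0.001):
    """Cubic dynamics: 'lo' and 'hi' are stable steady states, 'mid' is
    an unstable threshold separating their basins of attraction."""
    return y - k * (y - lo) * (y - mid) * (y - hi)

def iterate(step, y0, n=500):
    """Run the map forward n periods from initial output y0."""
    y = y0
    for _ in range(n):
        y = step(y)
    return y

# A small shock (100 -> 95) stays above the threshold at 90 and dies out;
# a large shock (100 -> 85) crosses it and the slump becomes permanent.
print(iterate(nonlinear_step, 95.0))  # converges back to ~100 (recovery)
print(iterate(nonlinear_step, 85.0))  # converges to ~80 (persistent slump)
print(iterate(linear_step, 85.0))     # the linear model always returns to 100
```

In the linear model the same 15-point shock always dies out; in the nonlinear one it permanently shifts the economy to the lower steady state, which is the kind of behaviour Munchau argues the standard models rule out.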

David Andolfatto writes that the dynamic general equilibrium (DGE) approach is the dominant methodology in macro today because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, and its great flexibility. The DGE approach provides an explicit description of what motivates and constrains individual actors. This property of the model reflects a belief that individuals respond to incentives: in particular, they are likely to respond in more or less predictable ways to changes in the economic environment to protect or further their interests. The approach also provides an explicit description of government policy. Finally, it insists that the policies adopted by private and public sector actors are in some sense "consistent" with each other. Notions of consistency are imposed through the use of solution concepts, like competitive equilibrium, Nash equilibrium, search and bargaining equilibrium, etc. Among other things, consistency requires that economic outcomes respect resource feasibility and budget constraints.

Non-linearity and multiplicity

Maybe the complaint is simply that economists don’t do enough nonlinear analysis

Paul Krugman writes that ranting about the need for new models is not helpful. First, claims that mainstream economists never think about, and/or lack the tools to consider, nonlinear stuff and multiple equilibria and all that are just wrong. Maybe the complaint is simply that economists don’t do enough nonlinear analysis. But the problem with nonlinear models is that it’s quite easy, if you’re moderately good at pushing symbols around, to write down models where nonlinearity leads to funny stuff. Showing that this bears any relationship to things that happen in the real world is, however, a lot harder, so nonlinear modeling all too easily turns into a game with no rules: tennis without a net.

Tony Yates writes that Benhabib and Schmitt-Grohé’s 2000 paper on the ‘perils of Taylor rules’ is one example of a paper [but there are hundreds] that embraced both nonlinearity and multiplicity. The model is solved non-linearly, in the presence of the zero bound, and it explains how there are two steady states: one with inflation at target, and one with nominal interest rates perpetually trapped at the zero bound.
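The mechanics behind that multiplicity can be reproduced in a few lines. The sketch below is my own construction with assumed parameter values (a 2% real rate, a 2% inflation target and a Taylor coefficient of 1.5, none of them taken from the paper): it intersects a Taylor rule truncated at the zero lower bound with the steady-state Fisher relation i = r + pi, and finds the two crossings, one at the inflation target and one at the zero bound with deflation equal to the real rate.

```python
# Two steady states of a Taylor rule truncated at the zero lower bound.
# All parameter values are assumptions for illustration, not numbers from
# Benhabib and Schmitt-Grohe's paper.

R = 0.02        # real interest rate
PI_STAR = 0.02  # inflation target
PHI = 1.5       # Taylor coefficient (> 1, the "Taylor principle")

def taylor_rate(pi):
    """Policy rate: Taylor rule, truncated at the zero lower bound."""
    return max(0.0, R + PI_STAR + PHI * (pi - PI_STAR))

def fisher_rate(pi):
    """Steady-state Fisher relation: nominal rate = real rate + inflation."""
    return R + pi

def steady_states(lo=-0.05, hi=0.05, n=2001):
    """Scan an inflation grid for crossings of the two schedules and
    refine each crossing by bisection."""
    def gap(pi):
        return taylor_rate(pi) - fisher_rate(pi)

    grid = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if gap(a) * gap(b) <= 0 and gap(a) != gap(b):  # sign change
            for _ in range(60):                        # bisection
                m = 0.5 * (a + b)
                if gap(a) * gap(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    # drop near-duplicate roots from adjacent grid intervals
    out = []
    for r in roots:
        if not out or abs(r - out[-1]) > 1e-9:
            out.append(r)
    return out

print(steady_states())  # pi = -R (zero-bound trap) and pi = PI_STAR (target)
```

With PHI above one, the truncated rule is flat at zero for low inflation and steeper than the Fisher line above the kink, which is exactly why the two schedules must cross twice: once in the deflationary trap and once at the intended target.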

Financial intermediation in macro models

Noah Smith writes that the favorite macro models didn't have any finance in them, with the possible lone exception of the Bernanke-Gertler "financial accelerator" models. That was a big mistake, especially since the Great Depression and crises in other countries (e.g. Japan) should have suggested that financial crashes were a big deal. To their credit, though, mainstream macroeconomists have been hastening to correct the mistake.

Frances Coppola writes that central banks are now “adding” the financial sector to existing DSGE models: but this does not begin to address the essential non-linearity of a monetary economy whose heart is a financial system that is not occasionally but NORMALLY far from equilibrium.

David Andolfatto writes that while one might legitimately observe that New Keynesian DSGE models or RBC models largely downplay the role of financial frictions and that practitioners should therefore not have relied so heavily on them, it would not be correct to say that DGE theory cannot account for financial crises. A large and lively literature on financial crises existed well before 2007. If central bank economists were not paying too much attention to that branch of the literature, it is at most an indictment of them and not of the body of tools that were available to address the questions that needed to be answered.

Asking the right questions and discrimination against alternative models

Olivier Blanchard writes that we all knew that there were “dark corners”: situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. We now know that we were much closer to those dark corners than we thought, and that the corners were even darker than we had thought.

The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown

Mark Thoma writes that all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere.

Noah Smith writes that if you have models for everything, you don't actually have any models at all. Without a way of choosing between models, your near-infinite stable of models turns into one big giant mega-model that can give anyone any result they want. Now, technically, you could choose between models based on the plausibility of the assumptions. But three things make this impossible in practice. First, the need for tractability means that the assumptions in almost any modern macro model will be utterly implausible to anyone who has not spent decades in a monastery high in the Himalayas training in the art of self-deception. Second, the assumptions are so stylized that it takes a huge amount of talent just to figure out what they are. And third, with a near-infinite catalog of models to comb through, there's just no way to compare any significant number of them all at once.

The US-Europe divergence in thinking after 2010

In Europe, by contrast, policy makers were ready and eager to throw textbook economics out the window

Paul Krugman writes that it’s wrong to claim, as many do, that policy failed because economic theory didn’t provide the guidance policy makers needed. In reality, theory provided excellent guidance, if only policy makers had been willing to listen. What stands out from around 2010 onward is the huge divergence in thinking that emerged between the United States and Europe. In America, the White House and the Federal Reserve mainly stayed faithful to standard Keynesian economics. In Europe, by contrast, policy makers were ready and eager to throw textbook economics out the window in favor of new approaches that were innovative, exciting and completely wrong.

About the authors

  • Jérémie Cohen-Setton

    Jérémie Cohen-Setton is a Research Fellow at the Peterson Institute for International Economics. Jérémie received his PhD in Economics from U.C. Berkeley and worked previously with Goldman Sachs Global Economic Research, HM Treasury, and Bruegel. At Bruegel, he was Research Assistant to Director Jean Pisani-Ferry and President Mario Monti. He also shaped and developed the Bruegel Economic Blogs Review.
