
Microdata for macroeconomics

02 March 2012

What’s at stake: Over the last few years, there has been a significant increase in the use of microdata to address macroeconomic questions. Microeconomic data in the form of state-level or even zip-code-level data has been used to address traditional macroeconomic questions, such as the size of policy multipliers, and to test alternative macroeconomic theories. An important advantage of this approach for investigating policy multipliers is the ability to control for other contemporaneous macro shocks. Zip-code-level data, in addition, can isolate channels much more effectively than traditional aggregate datasets, and hence more easily test alternative explanations of the same phenomenon. The difficulty of the approach generally lies in translating these cross-section estimates of policy multipliers or channels into estimates for the aggregate economy.

Microdata and policy multipliers: better identification, harder interpretation


The debate on the size of fiscal multipliers illustrates the advantages and drawbacks of using microdata to investigate macroeconomic questions.

As noted by Valerie Ramey in her recent literature review on the size of fiscal multipliers, the empirical literature based on aggregate data has generated a wide range of estimates, from less than one (or even close to zero) to almost 3, as in Alan Auerbach and Yuriy Gorodnichenko. The basic problem is that even fiscal shocks that are well identified – in the sense that they are not driven by current or expected changes in economic activity – often suffer from omitted-variable bias. Valerie Ramey and Matthew Shapiro, for example, identify military build-ups through a narrative approach and find a multiplier below one. But as pointed out by Christina Romer – in a recent speech reviewing the empirical evidence on the impact of fiscal policy – other developments affecting output, such as tax increases to pay for the wars, or other disruptions, such as rationing, came along with these major military actions. In addition, empirical approaches based on aggregate data often have a hard time controlling for the reaction of the central bank.

The empirical literature based on disaggregated data has significantly narrowed the range of these estimates. The main advantage of using disaggregated data – for example, data for US states – is that common macroeconomic factors get washed out in the empirical analysis. An increasing number of papers have thus relied on cross-sections or panels of states to estimate the effects of an increase in government spending. These papers typically find multipliers of about 1.5 (an exception is this recent paper by Cohen, Coval and Malloy). Gabriel Chodorow-Reich et al., for example, use a state’s pre-recession Medicaid spending level to instrument for ARRA state fiscal relief and find a multiplier in that range. So does Daniel Shoag, who uses changes in state spending caused by excess returns on state pension fund investments. Emi Nakamura and Jon Steinsson also find a similar number using state-specific sensitivity to aggregate changes in military spending as an instrument. Using changes in federal spending on states caused by updates of Census-based population estimates as an instrument, Juan Carlos Suarez Serrato and Philippe Wingender also find a number in that range.
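To make the logic of these studies concrete, here is a minimal sketch of the two-stage least squares idea they share: an instrument that shifts spending in a state for reasons unrelated to that state’s own economic conditions is used to recover the cross-sectional spending multiplier. This is not any particular paper’s code; the data are simulated and all variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical state-level cross-section: for each state, the change in
# output and in government spending (both scaled by initial state GDP),
# plus an instrument that moves spending for reasons unrelated to the
# state's own business cycle (e.g. a formula-driven federal allocation).
rng = np.random.default_rng(0)
n_states = 50
instrument = rng.normal(size=n_states)      # exogenous spending shifter
demand_shock = rng.normal(size=n_states)    # unobserved local conditions
spending = 0.5 * instrument + 0.3 * demand_shock \
    + rng.normal(scale=0.2, size=n_states)
output = 1.5 * spending + demand_shock + rng.normal(scale=0.5, size=n_states)

# Naive OLS is biased because spending responds to local conditions.
ols = sm.OLS(output, sm.add_constant(spending)).fit()

# Manual 2SLS: the first stage projects spending on the instrument; the
# second stage regresses output on the fitted (exogenous) spending.
first_stage = sm.OLS(spending, sm.add_constant(instrument)).fit()
second_stage = sm.OLS(output, sm.add_constant(first_stage.fittedvalues)).fit()

print(f"OLS multiplier:  {ols.params[1]:.2f}")           # biased upward
print(f"2SLS multiplier: {second_stage.params[1]:.2f}")  # near the true 1.5
```

Because the instrument is uncorrelated with the omitted local demand shock, the second-stage slope should land close to the true multiplier of 1.5, while OLS picks up the spurious comovement between spending and local conditions.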

In an important paper, Emi Nakamura and Jon Steinsson sketch a framework to translate these cross-section estimates into estimates of the aggregate impact of spending changes. The framework is imperfect but helps clarify a couple of important points. In particular, it shows that the number recovered from cross-sectional studies is not akin to the aggregate multiplier at the zero lower bound (ZLB). Since the nominal interest rate is fixed across regions, one might think that the number obtained would be akin to the closed-economy aggregate multiplier when nominal interest rates are fixed at the zero lower bound, in which case the New Keynesian model generates large multipliers (as in Gauti Eggertsson or Christiano, Eichenbaum and Rebelo). But this is not the case. The simple intuition ignores a crucial dynamic aspect of price responses in a monetary union like the United States. Since transitory demand shocks do not lead to permanent changes in relative prices across regions and the exchange rate is fixed within the monetary union, any short-run increase in prices in one region relative to another must eventually be reversed in the long run. This implies that even though relative short-term real interest rates fall in response to government spending shocks, relative long-term real interest rates do not (in contrast to the zero lower bound setting). And it is the fall in long-term real interest rates that generates a high multiplier in the zero lower bound setting.
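The long-term real rate point can be seen in a stylized two-region derivation (the notation here is illustrative, not the paper’s). Let regions a and b share the union-wide nominal rate, and let prices be in logs:

```latex
% Regions a and b share the nominal rate i_t (monetary union).
% Real rate: r_{x,t} = i_t - \pi_{x,t+1}, with \pi_{x,t+1} = p_{x,t+1} - p_{x,t}.
\begin{aligned}
\underbrace{\sum_{t=0}^{T-1}\left(r_{a,t}-r_{b,t}\right)}_{\text{relative long-term real rate}}
  &= \sum_{t=0}^{T-1}\left[(i_t-\pi_{a,t+1})-(i_t-\pi_{b,t+1})\right]\\
  &= -\left[(p_{a,T}-p_{b,T})-(p_{a,0}-p_{b,0})\right] \;=\; 0,
\end{aligned}
```

where the last equality holds because a transitory spending shock leaves long-run relative prices unchanged. Relative short-term real rates in the stimulated region do fall while its prices are rising, but the later reversal of relative prices undoes the effect over a long horizon. The cross-sectional experiment therefore cannot deliver the fall in long-term real rates that drives the large ZLB multiplier.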

Lessons on the Great Recession from microeconomic data


In a recent AER paper, Atif Mian and Amir Sufi point out that, in contrast to economists studying previous crises, we are fortunate to have access to large-scale microeconomic data sets and advances in computational capacity. These advantages allow for a more rigorous analysis of the current recession and therefore a more informed understanding of its origins, propagation, and consequences. The authors review empirical contributions based on micro-level analysis to illustrate how this methodology provides important clues about the origins of the crisis, the link between credit and asset prices, the feedback from asset prices to the real economy, and the role of household leverage in explaining the downturn.

A telling example of that approach concerns the factors behind the rise in leverage in the years leading up to the crisis. From a policy perspective, it is important to understand whether the rise in leverage was driven by demand-side productivity shocks or by supply-side financial factors. If the credit expansion was due to positive productivity or technology shocks, the subsequent crisis represents an “unlucky” event: the realized shocks were simply not as positive as anticipated. If that view were correct, there would be no role for public policy: credit booms would result from productivity-driven shifts in the demand for credit and should be left alone. A supply-driven rise in leverage, on the other hand, may not be innocuous. For example, if leverage growth is driven by risk-shifting supply-side incentives such as regulatory arbitrage or expectations of government bailouts, then there may be a role for intervention to realign incentives. In a 2009 QJE paper, the authors show that, contrary to the predictions of the productivity-based credit expansion hypothesis, zip codes that saw the largest increase in home-purchase mortgage originations from 2002 to 2005 experienced relative declines in income. More broadly, they show that the correlation between mortgage growth and income growth is negative from 2002 to 2005, while it is positive in all other periods since 1990.
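The test itself is simple to state. Below is a minimal sketch of the exercise with entirely simulated data and hypothetical column names (the actual paper draws on mortgage origination and zip-code income records); the boom-period sign flip is built into the simulation to mimic the pattern the paper documents.

```python
import numpy as np
import pandas as pd

# Hypothetical zip-code panel: growth in mortgage originations and in
# income, by period. Everything here is simulated for illustration.
rng = np.random.default_rng(1)
n_zips = 3000
periods = ["1990-1994", "1995-1998", "1999-2001", "2002-2005", "2006-2009"]
frames = []
for period in periods:
    income_growth = rng.normal(size=n_zips)
    # Flip the sign of the relationship in the boom period to mimic a
    # supply-driven expansion flowing to zip codes with declining income.
    slope = -0.3 if period == "2002-2005" else 0.3
    mortgage_growth = slope * income_growth + rng.normal(size=n_zips)
    frames.append(pd.DataFrame({"period": period,
                                "income_growth": income_growth,
                                "mortgage_growth": mortgage_growth}))
panel = pd.concat(frames, ignore_index=True)

# Correlation between mortgage growth and income growth, by period:
# positive everywhere except the 2002-2005 credit boom.
by_period = panel.groupby("period")[["income_growth", "mortgage_growth"]].corr()
print(by_period.xs("income_growth", level=1)["mortgage_growth"])
```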

In a recent working paper, Atif Mian and Amir Sufi also investigate competing explanations for the extent of job losses since the beginning of the crisis. The authors derive cross-sectional testable predictions from three classes of models: models that emphasize uncertainty (in particular policy uncertainty), models that emphasize structural unemployment related to construction, and models that emphasize an aggregate demand channel following a shock to household balance sheets. In particular, the aggregate demand channel predicts that employment losses in the non-tradable sector should be larger in the high-leverage U.S. counties that were most severely hit by the balance sheet shock, while losses in the tradable sector should be distributed uniformly across counties. The authors find exactly this pattern from 2007 to 2009, and do not find patterns in the data consistent with the other two explanations.
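A hedged sketch of this cross-sectional test, again with simulated county data and hypothetical variable names rather than the authors’ files: under the aggregate demand channel, pre-crisis county leverage should predict non-tradable job losses but not tradable ones.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical county cross-section: 2007-09 employment growth by sector
# and a pre-crisis household leverage measure (e.g. debt-to-income).
rng = np.random.default_rng(2)
n_counties = 900
leverage = rng.lognormal(mean=0.0, sigma=0.3, size=n_counties)

# Simulate the pattern the aggregate demand channel predicts: local
# demand collapses where balance sheets were weakest, hitting jobs in
# non-tradables (restaurants, retail), while tradable-sector losses
# reflect the national shock and are unrelated to local leverage.
nontradable_growth = -0.05 * leverage + rng.normal(scale=0.02, size=n_counties)
tradable_growth = -0.04 + rng.normal(scale=0.03, size=n_counties)

X = sm.add_constant(leverage)
for name, y in [("non-tradable", nontradable_growth),
                ("tradable", tradable_growth)]:
    fit = sm.OLS(y, X).fit()
    print(f"{name}: slope on leverage = {fit.params[1]:+.3f} "
          f"(t = {fit.tvalues[1]:.1f})")
```

A significantly negative slope for non-tradables alongside a flat slope for tradables is the signature of the demand channel; uncertainty- or construction-based stories would not generate this sectoral split by county leverage.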

How microdata can help policymaking


In a previous issue of the Blogs Review, we emphasized that the availability of high-frequency data can improve policymaking by improving nowcasting. Atif Mian and Amir Sufi point out that the widespread availability of microeconomic data has greatly enhanced our ability to understand the fundamental driving forces behind macroeconomic fluctuations and credit cycles. Micro-level data are now widely available for key variables of interest such as bank loans, house prices, consumer borrowing, spending, and defaults. These data are updated at quarterly frequency or higher, making them highly useful – not only to researchers investigating the past – but also for policy work. As an example, the authors point out that their analysis of the increase in leverage from their 2009 QJE paper could have been carried out in real time, as the developments were unfolding.

*Bruegel Economic Blogs Review is an information service that surveys external blogs. It does not survey Bruegel’s own publications, nor does it include comments by Bruegel authors.

About the authors

  • Jérémie Cohen-Setton

    Jérémie Cohen-Setton is a Research Fellow at the Peterson Institute for International Economics. Jérémie received his PhD in Economics from U.C. Berkeley and worked previously with Goldman Sachs Global Economic Research, HM Treasury, and Bruegel. At Bruegel, he was Research Assistant to Director Jean Pisani-Ferry and President Mario Monti. He also shaped and developed the Bruegel Economic Blogs Review.
