Blog post

Microfoundations in Macroeconomics

What’s at stake: The role of aggregate or ad-hoc models for policy discussions in an age where journal papers in macro theory are always microfounded

Publishing date
09 March 2012

The debate over DSGE models was brought to the forefront more than two years ago by Paul Krugman’s provocative essay “How Did Economists Get It So Wrong?” (see here for a review). Since then, an interesting discussion – interesting in the sense that it is not a discussion between those who do not understand the language of modern macroeconomics and those who do – has been going on in the blogosphere about the importance of microfoundations for macroeconomic analysis. In a previous post, we outlined recent extensions of the basic IS-LM framework and pointed to a specific strand (modeling financial frictions in New Keynesian models) of a burgeoning literature: models with heterogeneous agents. We use this week’s flurry of debate in the blogosphere to provide more background on these heterogeneous-agent models and on other alternative approaches to the representative-agent framework (behavioral macro models and agent-based models).

Microfounded and some other useful aggregate models

Mark Thoma points out that the reason many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The New Keynesian model was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. But the standard versions of this model do not explain how financial collapses of the type we just witnessed come about, and hence have little to say about what to do about them.
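For readers who want to see the mechanics, the Calvo setup assumes that each firm can reset its price in a given period only with probability $1-\theta$. In the simplest constant-returns case (a textbook sketch, not Thoma's own formulation), log-linearizing the resulting pricing decisions yields the New Keynesian Phillips curve linking inflation to expected inflation and real marginal cost $\widehat{mc}_t$:

\[ \pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \lambda\,\widehat{mc}_t, \qquad \lambda = \frac{(1-\theta)(1-\beta\theta)}{\theta} \]

Nothing in this structure has anywhere to put a collapsing financial sector, which is Thoma's point: the friction concerns the timing of price changes, not balance sheets.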

Simon Wren-Lewis argues that some aggregate models contain critical features that can be derived from a number of different microfoundations. In that situation, it is natural to want to work with these aggregate models. We could even say that they are more useful, because they have a generality that would be missing if we focused on one particular microfoundation. Suppose there is not just one but a variety of particular worlds which would lead to this set of aggregate macro relationships. Furthermore, suppose that more than one of these particular worlds was a reasonable representation of reality. In these circumstances, it would seem sensible to go straight to the aggregate model and ignore microfoundations.

In a follow-up post Simon Wren-Lewis argues that the microfoundations purist view is a mistake because it confuses ‘currently has no clear microfoundations’ with ‘cannot ever be microfounded’. Developing new microfounded macro models is hard, because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimization problems with this new ingredient added, and many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behavior elsewhere, and so we need to go back to the drawing board. It is very important to do all this, but it takes time. So the use of aggregate (or useful, or ad hoc) models deserves respect when there is empirical evidence supporting the ad hoc aggregate relationship and the implications of that relationship could be important. In these circumstances, it would be a mistake for academic analysis to have to wait for the microfoundations work to be done.
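To see concretely what "reworking all relevant optimization problems" involves, consider one standard way of microfounding consumption inertia, habit formation (the notation below is the textbook one, offered as an illustration rather than as Wren-Lewis's example):

\[ \max\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, \frac{\left(c_t - h\,c_{t-1}\right)^{1-\sigma}}{1-\sigma}, \qquad 0 \le h < 1 \]

Because period utility now depends on $c_{t-1}$, the consumption Euler equation links three adjacent periods instead of two, which delivers the inertia; but every other optimality condition involving the marginal utility of consumption (asset pricing, labor supply) has to be rederived as well, and may start behaving implausibly.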

The Lucas Critique and the Representative Agent framework

Noahpinion points out that the Phillips Curve is the classic example of why aggregate relationships might not be useful without an understanding of the microfoundations. That doesn't make aggregate-only models useless, but it should make people cautious about using them. The usual answer is that "microfoundations make models immune to the Lucas Critique": the rules of individual behavior don't change when policy changes, so basing our models purely on the rules of individual behavior will allow us to predict the effects of government policies. Actually, it’s not clear this really works. For example, most microfounded models rely on utility functions with constant parameters - these are the "tastes" that Bob Lucas and other founders of modern macro believed to be fundamental and unchanging. But Noahpinion would be willing to bet that different macro policies can change people's risk aversion. If that's the case, then using microfoundations doesn't really answer the Lucas Critique.
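To make the objection concrete, the "deep parameters" in question are typically coefficients like the $\gamma$ in a constant-relative-risk-aversion utility function (a standard textbook object, shown here for illustration):

\[ u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad \gamma > 0 \]

The Lucas-critique defense of microfoundations rests on $\gamma$ staying fixed when the policy regime changes; if policy itself can move $\gamma$, as Noahpinion suggests, the microfounded model is no more policy-invariant than an ad hoc aggregate equation.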

The use of representative agents in macroeconomics is at the heart of the recent soul-searching among macroeconomists and of the critique directed at the profession.

In a review of Michael Woodford's major and very influential monetary theory textbook (Interest and Prices), Kevin Hoover argued that if Keynesians were stigmatized for dealing only in aggregates, the representative agent is nothing but an aggregate in microeconomic drag. He recalls that the most important – but widely neglected – results of general equilibrium theory in the 1950s and 1960s showed that the representative agent’s utility function cannot be thought of as ranking the outcomes of policy in a manner that reflects the rankings of individual agents.

AlphaSources provides some useful background on the origins of the representative agent. It first appeared in Alfred Marshall’s Principles of Economics in the form of the representative firm, an entity Marshall originally conjured in order to construct a supply curve for the industry. After a devastating critique by, among others, John Maynard Keynes and Lionel Robbins, the idea of the representative agent was put to rest in the first part of the 20th century. According to Hartley (1996), the first post-Marshall use of representative agents has its origins in the period in which neo-classical economics was reaching its zenith. Concretely, Lucas and Rapping (1970) is cited as the first contribution using a representative agent, detailing the theory of intertemporal labor supply that is a core assumption of most real business cycle models (see D. Romer, 2006, ch. 4).

Microfoundations or "Microfoundations"

Paul Krugman argues that when making comparisons between economics and physical science we should keep in mind that what we call “microfoundations” are not like physical laws. Heck, they’re not even true. Maximizing consumers are just a metaphor, possibly useful in making sense of behavior, but possibly not. The metaphors we use for microfoundations have no claim to be regarded as representing a higher order of truth than the ad hoc aggregate metaphors we use in IS-LM or whatever. Noahpinion argues that macroeconomists have basically done one of two things: either A) gone right on using aggregate models, while writing down some "microfoundations" to please journal editors, or B) drawn policy recommendations directly from incorrect models of individual behavior.

Kevin Grier, professor at the University of Oklahoma, points out that we don't even have very good microfoundations for money! We just put it in the utility function or arbitrarily assume a "cash in advance" constraint. Amazingly though, central banks in the Western world have spent a lot of money and economist-hours trying to construct DSGE models that are actually useful for forecasting. This effort has largely led to the de facto abandonment of microfoundations. In the quest to make the models "work", we often either choose whatever microfoundation gives the best forecast regardless of micro evidence about whether or not it is accurate, or we just add ad hoc, non-microfounded "frictions" to create more inertia. Or we just add more and more "shocks" to the model and say things like "much of the variation in X is caused by shocks to the markup".
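For the uninitiated, the two workarounds Grier mentions look roughly as follows (standard textbook formulations, with timing conventions that vary across treatments):

\[ \text{Money in the utility function:}\quad \max\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, u\!\left(c_t,\, \frac{M_t}{P_t}\right) \qquad\qquad \text{Cash in advance:}\quad P_t\, c_t \le M_t \]

Neither explains why money is valued: real balances are simply placed inside the utility function, or a constraint simply forces consumption purchases to be paid for with cash.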

Robert Waldmann points out that there is a tension between the two pillars of Modern Macroeconomic Methodology: Milton Friedman's methodology of positive economics and the Lucas critique. A nickel version of Friedman’s methodology of positive economics starts from the idea that models can be useful even if they are not true (even if they are false by definition). This is universally agreed. It implies that we shouldn't treat models as hypotheses to be tested, so we are not necessarily interested in every testable implication of the model. Instead we should care about the implications of the model for the variables that matter to us.

Beyond the RA framework: Models with heterogeneous agents

Douglas Clement reviews for the Minneapolis Fed a flourishing literature in which researchers explore the promise of economic models that allow for human variation. Including different tastes or characteristics in models has led to a reformulation of many results previously derived with simple representative agents. While standard models tend to find a small impact of recessions, heterogeneous-agent models highlight distributive effects and thus point to much larger overall effects on unemployment and wealth. The consequences of inflation have also been revised: expected inflation has a negative impact on the poor because they hold more of their wealth in cash than do the rich but, on the other hand, it creates large losses for older, wealthy households because they hold more bonds than others. Deflation could have the opposite consequences.

Simon Wren-Lewis points to a number of interesting questions raised by the role of microfoundations in macroeconomics: Can the microfoundations approach embrace all kinds of heterogeneity, or will such models lose their attractiveness in their complexity? Does sticking with simple, representative-agent macro impart some kind of bias? Does a microfoundations approach discourage investigation of the more ‘difficult’ but more important issues? Might both these questions suggest a link between too simple a micro-based view and a failure to understand what was going on before the financial crash? Are alternatives to microfoundations modeling methodologically coherent? Is empirical evidence ever going to be strong and clear enough to trump internal consistency?

Jonathan Heathcote, Kjetil Storesletten and Gianluca Violante have, for example, applied these heterogeneous-agent models to investigate the impact of rising wage inequality on labor supply and consumption. Thomas Piketty, Emmanuel Saez and Stefanie Stantcheva have also included heterogeneous agents in normative models of taxation, leading to a reconsideration of important results in that field.

Beyond the RA framework: The agent based approach to macroeconomics

Doyne Farmer at the Santa Fe Institute has become one of the leading proponents of the agent-based approach to macroeconomics. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions, which interact through prescribed rules. Contrary to standard dynamic economic models, these models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, and they do not assume an a priori form of rationality. Behaviors are modeled according to what is observed: researchers thus need a tremendous amount of data in order to identify robust patterns. The models allow for non-equilibrium states and non-linearities: they can thus easily generate non-market-clearing phenomena and endogenous crises.
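To give a flavor of the approach, here is a deliberately minimal sketch of an agent-based market, written for this post rather than taken from Farmer's work; the two behavioral rules, the parameter values, and the price-adjustment equation are all illustrative assumptions:

    import random

    FUNDAMENTAL_VALUE = 100.0  # illustrative "true" value of the asset

    class Trader:
        """An agent following one of two simple rules of thumb."""

        def __init__(self):
            self.kind = random.choice(["fundamentalist", "chartist"])
            self.strength = random.uniform(0.5, 1.5)

        def demand(self, prices):
            if self.kind == "fundamentalist":
                # Buy when the price is below the perceived fundamental value.
                return self.strength * (FUNDAMENTAL_VALUE - prices[-1])
            # Chartist: extrapolate the most recent price change.
            return self.strength * (prices[-1] - prices[-2])

    def simulate(n_agents=500, n_steps=300, speed=0.0005, noise=0.5):
        traders = [Trader() for _ in range(n_agents)]
        prices = [100.0, 101.0]  # arbitrary starting history
        for _ in range(n_steps):
            # No market-clearing condition is imposed: the price simply
            # adjusts to aggregate excess demand plus idiosyncratic noise,
            # so booms and busts can emerge endogenously.
            excess = sum(t.demand(prices) for t in traders)
            prices.append(prices[-1] + speed * excess + random.gauss(0, noise))
        return prices

    if __name__ == "__main__":
        random.seed(1)
        path = simulate()
        print([round(p, 2) for p in path[-5:]])

Even this toy version displays the defining features described above: the price is not pinned down by an equilibrium condition, and the interaction of trend-followers and fundamentalists can generate booms and reversals that no individual rule contains.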

In an article in Nature in 2009, Farmer and Foley claimed that these models could include financial interactions in a much more complex and realistic way than usual models. Farmer and his team are now developing an agent-based model of the housing market to mimic the current financial crisis. The team collects data on actual people to calibrate a rich model with millions of interacting agents. This is what they call a bottom-up approach to macroeconomics: see here for a more detailed presentation of that approach by INET.

In order to convince most economists, agent-based models will need to show that the mechanisms modeled in these complex interactions remain clear and intuitive, and are anything but a new black box. Richard Serlin points out, in particular, that aggregation is a huge challenge for microfounded models, since complex systems often have chaotic properties.

Beyond the RA framework: IKE and Behavioral Macro

Kevin Hoover has recently written what he calls "an Econ 1 (Principles) version" of the Imperfect Knowledge Economics (IKE) developed by Roman Frydman and Michael D. Goldberg, which aims to provide an alternative to the representative-agent framework. IKE sees investors as adopting various strategies for forming expectations of future prices. These strategies are not unique, so that there is a distribution of strategies, and investors may alter their strategies from time to time. Frydman and Goldberg have notably applied their approach to asset pricing and financial markets. They complain that current models amount to an “economics of magical thinking”. Behavioral economics has shown that market participants do not act the way conventional economists would predict “rational individuals” to act. But according to Frydman and Goldberg it would also be wrong to interpret these empirical findings to mean that many market participants are irrational, prone to emotion, or ignore economic fundamentals for other reasons. People can be rational in different ways depending on the context and the information available to them.

Roger Guesnerie has a rather similar view: developing new approaches to rationality and expectations is the promising route that economics should follow in order to build macroeconomic models in the post-crisis era.

Paul De Grauwe recently wrote a textbook on behavioral macroeconomics. Contrary to mainstream top-down models, in which agents are capable of understanding the whole picture and use this superior information to determine their optimal plans, the models used in this book are bottom-up models in which all agents experience cognitive limitations. As a result, these agents are only capable of understanding and using small bits of information, and they rely on simple rules of behavior. These models are not devoid of rationality: agents behave rationally in that they are willing to learn from their mistakes. Importantly, these models produce radically different macroeconomic dynamics from RA models.
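The switching logic behind such bottom-up models can be sketched compactly. In the discrete-choice formulation commonly used in this literature (presented here as an illustration, with notation that is not necessarily De Grauwe's), the fraction of agents using forecasting rule $i$ at time $t$ rises with the rule's recent forecast performance $U_{i,t}$:

\[ p_{i,t} = \frac{\exp\!\left(\gamma\, U_{i,t}\right)}{\sum_{j} \exp\!\left(\gamma\, U_{j,t}\right)} \]

The parameter $\gamma \ge 0$ is the intensity of choice: with $\gamma = 0$ agents pick rules at random, while a large $\gamma$ means they switch quickly to whatever rule has recently forecast best, which is exactly the "learning from mistakes" rationality described above.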

About the authors

  • Jérémie Cohen-Setton

    Jérémie Cohen-Setton is a Research Fellow at the Peterson Institute for International Economics. Jérémie received his PhD in Economics from U.C. Berkeley and worked previously with Goldman Sachs Global Economic Research, HM Treasury, and Bruegel. At Bruegel, he was Research Assistant to Director Jean Pisani-Ferry and President Mario Monti. He also shaped and developed the Bruegel Economic Blogs Review.
