The Sticky Price Hypothesis: A Critique

I was, in my younger days, a professional drywall finisher (we called ourselves “tapers”). I worked on commercial sites and residential sites, union and non-union jobs, by wage rate and by piece rate. Yep, I did it all (and no, I am not available to do your basement).

When the recession hit the Vancouver construction sector in 1981, I could no longer find work and, after a brief spell of unemployment, I went back to school. Well, looking back now, I’d say that there was plenty of work available, but none that would have compensated me enough to forgo my next best opportunity. I am also struck by how I never believed, even as a kid, that my transition from employment to unemployment to out of the labor force had anything to do with a sticky nominal wage.

And I believe that Keynes would have agreed with my assessment. That is, a careful reading of the General Theory reveals that Keynes never assumed sticky prices, except briefly, as an expositional device, and in the form of a fixed nominal wage. Indeed, later on in the GT, he explains why the economy might function much better if wages were in fact sticky. Evidently, the mechanism he had in mind has little, if anything, to do with the one emphasized in a standard New Keynesian (NK) model.

A defining characteristic of the NK paradigm is the existence of price-setting agents who find it too costly to adjust (product or labor) prices at high frequency, and who are somehow behaviorally committed to delivering goods (products or labor) in a market at historically outdated terms of trade. What is it that motivates this modeling device?

Well, the evidence, for one thing. Clearly, at least some and perhaps most nominal prices/wages are “sticky” in the sense that they appear insensitive to high-frequency changes in macroeconomic conditions. Beginning with the pioneering work of Mark Bils and Peter Klenow, we now know a lot more about the nature of this stickiness at the microeconomic level. In their 2002 study, for example, Bils and Klenow find that the median consumer item changes price every 4.3 months. Service prices exhibit more stickiness than goods prices; the former change every 7.8 months, while the latter change every 3.2 months. Another important distinction appears to be with respect to raw versus processed goods. The former change price every 1.6 months, while the latter change price every 5.7 months. (See here for a summary of their results.)
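Under the simplifying assumption of a constant monthly adjustment hazard (a common device in this literature, not something Bils and Klenow themselves impose), reported frequencies and durations can be converted into one another. A minimal sketch; the 15% monthly frequency is my own illustrative number, chosen only because it implies a median duration near the 4.3-month figure cited above:

```python
import math

def implied_durations(monthly_freq):
    """Mean and median price-spell lengths (in months) implied by a
    constant monthly probability of a price change."""
    mean = 1.0 / monthly_freq                              # mean of a geometric waiting time
    median = math.log(0.5) / math.log(1.0 - monthly_freq)  # months until half of all spells end
    return mean, median

# An illustrative 15% monthly adjustment frequency (hypothetical, not
# a Bils-Klenow estimate) implies a median spell of roughly 4.3 months.
mean, median = implied_durations(0.15)
```

Note that the mean and median spells differ under this assumption, which is one reason summary statistics on "stickiness" must be read with care.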

Alright then, the data show sticky prices and the NK model has sticky prices. So what is there to argue about? In fact, something very important: do not confuse measurement with theory. The sticky price hypothesis is a theory; i.e., a proposed mechanism designed to interpret the data. And while the theory arguably has some empirical support, that support is not as strong as one is generally led to believe. Sticky price models calibrated to match the observed average duration of price changes (just over one quarter) imply relatively benign consequences. Things get uglier when trying to match model predictions to the microdata; see Klenow 2003. These considerations have led some economists to explore other avenues of “stickiness”; e.g., the “sticky information” models posited by Mankiw and Reis (QJE 2002).

There may be more than one way to interpret the available evidence. Figuring out which interpretation is best (most accurate) is important because different mechanisms frequently lead to very different policy conclusions. I have my own favored hypothesis that I’ll share with you shortly. But before I do so, I want to describe the way I think many people organize their thoughts on this matter.

I imagine that many begin by conjuring up, in their mind’s eye, a pair of Marshallian scissors (supply and demand). If markets work well, then prices should be determined by a market clearing condition (scissor intersection). Because the economy is constantly changing (subject to shocks), one would expect nominal prices to change at high frequency too. But they do not. Ergo, it must be the case that (at least some) markets do not clear. Because the market mechanism is now revealed to generate mismatches in supply and demand, there is a clear role for government intervention.

This latter conclusion is taken to be obvious vis-a-vis the market for labor, where the very existence of unemployed workers is taken as prima facie evidence that markets do not clear. The economy, evidently, has a “potential” or “natural” level of GDP or employment, and deviations away from this “long-run” level (output “gaps”) are the product of “shocks” together with the inability of the markets to clear in the short-run. Please enter, the stabilizers (monetary and fiscal policy).

Well, that’s one way to think about it. Certainly, some variant of this line of reasoning is taught to most econ undergrads. The educated layperson is regularly exposed to this view in blog posts of pop economists. Many, if not most, Fed economists (not here in St. Louis) take this view too. A majority opinion, however, does not make it correct or, more precisely, the best interpretation possible. Let me explain.

Walras’ auction and Marshall’s scissors have justifiably had a large influence on the way generations of economists have organized their thinking about price-quantity determination. I am not sure, however, that the underlying assumptions are always well understood. A defining characteristic of these (theoretical) market mechanisms is the assumption of anonymous participants in centralized exchange settings. In a competitive setting, trades are intermediated by a fictitious auctioneer (a metaphor for unmodeled “market forces” that equate supplies and demands). In monopolistically competitive settings, agents set prices (assumed to be linear, and hence inefficient).

I want you to dwell on this for a moment: anonymous people trading in centralized exchanges. It is an abstraction adopted in both neoclassical and NK theory. The abstraction may be innocuous in some applications, but not in others. What does the abstraction rule out, and why might this be important for interpreting price data?

To begin, anonymity rules out the existence of personal trading relationships. Every morning, you wake up and dispose of your goods or labor in some central location to some anonymous group of purchasers (wham bam, thank you ma’am, so to speak). Because trading relationships are absent, the terms of trade are “static” in the sense of being determined on the spot among transients. And because this is so, the spot price has, by assumption, an enormous role to play in determining how resources are allocated in spot exchanges. In particular, if the spot price is for some reason sticky, spot allocations will generally be inefficient (spot markets will not clear).

The theoretical construct of an anonymous spot market is probably not a bad approximation for how some goods are transacted in reality. Markets for commodities like wheat and oil come immediately to mind. This is true of goods more so than services, and of raw goods more so than processed goods. (Precisely the set of goods that appear to have the most flexible prices in the Bils and Klenow data set.)

It strikes me as self-evident that most labor, as well as many goods and securities, is not transacted in the manner of (say) wheat or equity shares in General Motors. Many markets more closely resemble the marriage market, where durable (or semi-durable) relationships are formed (and terminated) in decentralized exchanges. In bilateral (or multilateral) relationships, your identity and history matter; you are not anonymous. The same is true of many goods and securities markets. Retail and wholesale outlets expend great effort to cultivate relationships with their customers. So do dealers in over-the-counter securities markets. Understanding this fact turns out to have, I think, a profound implication for the way in which one interprets price data. Why is this? Let me explain (my goodness, there is a lot of explaining to do!).

When a pair of traders meet bilaterally, there may be gains to trade (a surplus) by forming a relationship and trading over time (until match separation). The gains to trade refer to the capitalized value of the surplus; that is, the present value of a sequence of “spot” surpluses (let me call this the match surplus). In such a relationship, there is no sense in which a Walrasian or Marshallian market “clears” in every spot exchange that occurs throughout the relationship. Instead, the match surplus is divided by way of a bargaining process that prescribes (either implicitly or explicitly) an entire sequence of prices (or wages) over the life of the relationship.

Now here’s the rub. It is not immediately clear whether anything uniquely pins down a negotiated wage/price path over the life of a relationship. In fact, in many theoretical bargaining games, the “equilibrium” price path is largely indeterminate. Evidently, there are many different ways to slice a pie (match surplus). You can pay me a lot today, and a little tomorrow–or the other way around. In many cases, it really doesn’t matter. Either way, both parties presumably allocate their resources in a manner to maximize match surplus, independently of the time path of the terms of trade.
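The pie-slicing point can be made with simple arithmetic. A minimal sketch with made-up numbers (a five-period match, a 5% discount rate): a flat wage path and a back-loaded one deliver the worker exactly the same present value, and so slice the match surplus identically.

```python
def present_value(payments, r=0.05):
    """Discounted sum of a payment stream at interest rate r."""
    return sum(p / (1 + r) ** t for t, p in enumerate(payments))

# A flat wage path over a five-period match (all numbers hypothetical).
flat = [100.0] * 5
target = present_value(flat)

# Back-load the path: pay 80 for four periods, then whatever final
# payment equates the two present values.
r = 0.05
final = (target - present_value([80.0] * 4, r)) * (1 + r) ** 4
backloaded = [80.0] * 4 + [final]

# Both paths slice the same pie: identical present values.
assert abs(present_value(backloaded) - target) < 1e-9
```

Which of the many equivalent paths gets picked may depend on the bargaining protocol, but the allocation of resources within the match need not depend on that choice.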

Now, if nominal price changes are costly, one such time path may entail a nominal price that adjusts infrequently over the course of the relationship. Such apparent “stickiness” has, however, little, if any, allocative consequence over the life of the match (as far as I can tell, this point was originally made by Robert Barro, although I am presently unable to locate the source).

In decentralized relationship markets, the notion of “market clearing” needs to be modified. The simple fact that nominal prices are sticky does not constitute evidence that the relevant market is not clearing in an appropriate sense. Note that this does not imply that there is no role for government intervention. In particular, it is possible that while resources within a match are allocated efficiently, resources economy-wide are not (this may happen, for example, if there are externalities in the search market).

According to this interpretation, unemployment (workers without a job, but who are looking for jobs) is entirely unrelated to the phenomenon of observed nominal wage stickiness. Unemployment is instead an equilibrium phenomenon: the byproduct of search and matching frictions and of shocks that alter the value of match formation and job retention. No amount of price flexibility, whether real or nominal, will ever eliminate unemployment in a growing and dynamic economy. (It is possible, of course, for unemployment to be too high or too low, or to vary too much, or not enough.)

What accounts for the enduring popularity of sticky price models? First, they do the least violence to Walras and Marshall. Second, they imply that money is non-neutral; something that central bankers are particularly fond of believing in. And third, they appear to rationalize (legitimize) interest rate policies like the Taylor rule.

I’m not sure that these are particularly compelling reasons (although there may be others; feel free to share). First, perhaps we should do violence to Walras and Marshall (search and bargaining theory is one way to go). Second, it is easy to generate money non-neutrality in models with full price flexibility (and heterogeneous consumers). Third, modeling financial (instead of price-setting) frictions would turn attention away from simple Taylor rules, diverting it to other (arguably more important) aspects of central banking (the payments system, lender-of-last-resort operations, etc.).

In conclusion, I have a hard time taking the sticky price hypothesis seriously. The theory literally implies that if prices were fully flexible, many of the worst properties of recessions would be avoided. There would be no liquidity traps, no financial crises, and no lost decades. Conversely, if prices are sticky (in the theoretical sense), simple government policies, like raising the long-run inflation rate or expanding government spending, can evidently restore something close to economic nirvana. More than one prominent econblogger appears wedded to this view. I remain skeptical.


About David Andolfatto

Affiliation: Simon Fraser University and St. Louis Fed

David Andolfatto is a Vice President in the Research Division of the Federal Reserve Bank of St. Louis. He is also a professor of economics at Simon Fraser University.

Professor Andolfatto earned his Ph.D. in economics from the University of Western Ontario in 1994, M.A. and B.B.A. from Simon Fraser University. He was associate professor at the University of Waterloo before moving to Simon Fraser University in 2000.

His current research is focused on reconciling theories of money and banking. His past research has examined questions relating to the business cycle, contract design, bank-runs, unemployment insurance, monetary policy regimes, endogenous debt constraints, and technology diffusion.

Visit: MacroMania, David Andolfatto's Page
