Guest post: Why “Integral” is wrong about Price Level Targeting (by J. Pedersen)

I have always said that my blog should be open to debate and I am happy to have guest posts from clever and enlightened economists (and non-economists) about monetary matters. I am therefore delighted that my good friend and colleague Jens Pedersen (I used to be his boss…) has offered to write a reply to “Integral’s” post on price level targeting versus NGDP level targeting. Jens recently graduated from the University of Copenhagen, where he wrote his master’s thesis on Price Level Targeting.

Jens, take it away…

Lars Christensen

Guest post: Why “Integral” is wrong about Price Level Targeting

by Jens Pedersen

The purpose of this comment is two-fold. First, I argue that “Integral” in his guest post “Measuring the stance of monetary policy through NGDP and prices” is wrong when he concludes that the Federal Reserve has done a fine job of achieving price level path stability and that, by this measure, it maintains a tight stance on monetary policy. Second, I present a way of evaluating the Fed’s monetary policy stance based on the theory of optimal monetary policy.

“Integral” assumes that the Federal Reserve has targeted an implicit linear path for the price level since the beginning of the Great Moderation. However, following Pedersen (2011), using the deviation of the price level from a linear trend (or the deviation of nominal GDP) to evaluate the stance of monetary policy requires taking into account potential breaks and shifts in the trend when the monetary policy regime changes. Changes in the monetary policy committee, the mandate, the targets etc. may lead to a shift or break in the targeted trend. Hence, the current implicit targeted trend for the price level (or nominal GDP) should properly be estimated from February 2006 onwards to take into account the change of FOMC chairman, or should alternatively allow for this possible shift or break.

Changing the estimation period changes the conclusions of “Integral’s” analysis. Below, I illustrate the deviation of the log core PCE index from a linear trend estimated over the period 2006:2-2006:12. As the figure shows, the Fed has significantly undershot its implicit price level target and has not achieved price level path stability during the Great Recession. Currently, the price level gap is around 3% and increasing. Hence, looking at the deviation of the price level from the implicit price level trend does indeed suggest that monetary policy should be eased.
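For readers who want to replicate this kind of calculation, the sketch below shows one way to estimate a log-linear trend over a chosen regime window and measure the current price level gap. The file name, column name and sample window are placeholders, not the exact data or window used above.

```python
# Sketch: price level gap relative to a log-linear trend estimated on a regime window.
# Assumes a monthly core PCE price index in a CSV; file and column names are hypothetical.
import numpy as np
import pandas as pd

pce = pd.read_csv("core_pce.csv", index_col="date", parse_dates=True)["core_pce_index"]
log_p = np.log(pce)

# Estimate the trend only on the window associated with the current policy regime.
window = log_p.loc["2006-02":"2006-12"]          # illustrative regime window
t_win = np.arange(len(window))
slope, intercept = np.polyfit(t_win, window.values, 1)

# Extrapolate the trend to the full sample and compute the gap in percent.
t_all = np.arange(len(log_p)) - log_p.index.get_loc(window.index[0])
trend = intercept + slope * t_all
gap_pct = 100 * (log_p.values - trend)

print(f"Latest price level gap: {gap_pct[-1]:.1f}%")
```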

Following Clarida et al. (1999), Woodford (2003) and Vestin (2006), optimal monetary policy involves a dual mandate which requires the central bank to be concerned both with the deviation of output from its efficient level and with the deviation of the price level from its targeted level. The first-best way of evaluating the Fed’s monetary policy stance is therefore relative to this optimal policy.

However, this method requires a clear reference point for the output gap. Common practice has been to calculate the output gap as the deviation of real output from its HP-filtered trend. This practice is arguably an unfortunate legacy of the RBC view of economic fluctuations. Theoretically, it fails to take account of short-run fluctuations in the efficient level of output. Empirically, it does a poor job of estimating potential output near the end points of the sample.
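The end-point problem is easy to demonstrate. The sketch below, which assumes a quarterly real GDP series in a CSV file (names are hypothetical), computes the HP-filtered output gap twice, once on the full sample and once on a sample truncated by two years, and compares the gap estimate at the last common date.

```python
# Sketch: end-point sensitivity of the HP-filtered output gap.
# Assumes a quarterly real GDP series; file and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

gdp = pd.read_csv("real_gdp.csv", index_col="date", parse_dates=True)["real_gdp"]
log_y = np.log(gdp)

# Gap from the full sample.
cycle_full, _ = hpfilter(log_y, lamb=1600)

# Gap from a sample that ends two years (8 quarters) earlier.
cycle_short, _ = hpfilter(log_y.iloc[:-8], lamb=1600)

# The same date gets a noticeably different gap estimate once it sits near the sample end.
date = cycle_short.index[-1]
print(f"Gap at {date.date()} using full sample:      {100 * cycle_full.loc[date]:.2f}%")
print(f"Gap at {date.date()} using truncated sample: {100 * cycle_short.loc[date]:.2f}%")
```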

Fortunately, Jordi Galí in Galí (2011) shows how to circumvent these problems and derive a theoretically consistent output gap, defined as the deviation of real output from its efficient counterpart. The efficient level of output corresponds to the first-best allocation in the economy, i.e. the output achieved when no nominal rigidities or imperfections are present. Galí further shows how this output gap can be derived using only two observable variables: the unemployment rate and the labour income share.

The chart below depicts the efficiency gap in the US economy. Note that this definition does not allow positive values. It is clear from the figure that there is at present significant economic slack in the US economy of historic dimensions. The US output gap is currently almost 6.5% and undershoots its historical mean by more than 1.5 percentage points.

Hence, the present price level gap and output gap reveal that the Federal Reserve has not conducted optimal monetary policy during the Great Recession. Furthermore, the analysis suggests that the Fed could easily raise inflation expectations by committing to closing the price level gap. This should give the desired boost to demand and spending and further close the output gap.

References:

Clarida et al. (1999), “The Science of Monetary Policy: A New Keynesian Perspective”

Galí (2011), “Unemployment Fluctuations and Stabilization Policies: A New Keynesian Perspective”

Pedersen (2011), “Price Level Targeting: Optimal Anchoring of Expectations in a New Keynesian Model”

Vestin (2006), “Price Level Targeting versus Inflation Targeting”

Woodford (2003), “Interest and Prices”


Guest blog: The Integral Reviews: Paper 3 – Hall (2009)

by “Integral”

Reviewed: Robert Hall (2009), “By How Much Does GDP Rise If the Government Buys More Output?” NBER WP 15496

Executive summary

The average government purchases multiplier is about 0.5, taking into account both empirical and structural evidence. The only way to get “large” multipliers of around 1.6 is to assume a large degree of non-optimizing behavior, an inflexible wage rate, a binding zero lower bound on nominal interest rates, and a monetary policy that is completely ineffective at influencing aggregate demand while the fiscal authority retains that influence.

The key ingredients to generating a large output multiplier are sticky wages/prices, a highly countercyclical markup ratio, and “passive” monetary policy which does not counteract the fiscal expansion.

The assumptions that underlie “the effectiveness of monetary policy” (sticky prices and a countercyclical markup) also drive “the effectiveness of fiscal policy.” The two are similar in that respect.

Summary

Hall provides a convenient overview of the state of economic knowledge about the government purchases multiplier. He does this in four steps: simple regression evidence, VAR evidence, structural evidence from RBC models, and structural evidence from various sticky-price/sticky-wage models.

Empirical evidence begins with the simple OLS regression framework. Hall obtains the output multiplier by regressing the change in output on the change in military expenditures (a proxy for the exogenous portion of government spending). He finds multipliers significantly larger than zero but less than unity, mostly in the neighborhood of one-half. This estimate of the “average multiplier” is confounded by two problems: (1) because of omitted-variable bias, the implied multiplier should be taken as a lower bound rather than an unbiased estimate, and (2) the estimates are driven entirely by observations during WWII and the Korean War.
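As a rough illustration of this regression approach (not Hall’s exact specification), the sketch below regresses the change in real output on the change in real military spending, with both changes scaled by lagged GDP so the slope can be read as a dollar-for-dollar multiplier. The file and column names are hypothetical.

```python
# Sketch: OLS estimate of the government purchases multiplier from military spending.
# Columns are hypothetical: annual real GDP and real military purchases in the same units.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("us_annual.csv", index_col="year")

# Scale changes by lagged GDP so the coefficient is a dollar-for-dollar multiplier.
dy = df["real_gdp"].diff() / df["real_gdp"].shift(1)
dg = df["real_military"].diff() / df["real_gdp"].shift(1)

data = pd.concat([dy, dg], axis=1, keys=["dy", "dg"]).dropna()
model = sm.OLS(data["dy"], sm.add_constant(data["dg"]))
result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 2})  # Newey-West standard errors

print(result.summary())  # the coefficient on dg is the estimated multiplier
```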

The VAR approach produces a range of estimates. Hall surveys five prior studies and finds that the government purchases multiplier is non-negative on impact across all studies and consistently less than unity, but there is considerable variation in the exact point estimates. The VAR approach typically suffers from the same omitted-variable bias as OLS.
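For completeness, here is a minimal sketch of the VAR exercise: a small VAR in government spending and output, with the spending shock identified by a Cholesky ordering that places government spending first (a common identification, not necessarily the one used in the studies Hall surveys). Data names are hypothetical.

```python
# Sketch: impulse response of output to a government spending shock in a small VAR.
# Assumes quarterly real government purchases and real GDP; names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("us_quarterly.csv", index_col="date", parse_dates=True)
data = np.log(df[["real_gov", "real_gdp"]])  # ordering: spending first for Cholesky identification

model = VAR(data)
results = model.fit(4)          # four lags, a common choice for quarterly data
irf = results.irf(12)           # responses over 12 quarters
irf.plot(orth=True)             # orthogonalized IRFs; read off the response of real_gdp to real_gov
```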

Hall then turns to a review of the structural evidence. He first shows the standard RBC result that if wages and prices are flexible, the output multiplier is essentially zero or even negative. While a useful benchmark, this result is of limited use for applied work.

Adding wage frictions forces laborers to operate off the labor supply curve, so output can plausibly expand in response to an increase in government demand. Hall indeed finds that the multiplier is higher in small-scale NK models and depends on consumer behavior. With consumers pinned down by the permanent-income/life-cycle model, multipliers tend to range around 0.7. Only if consumers are rule-of-thumb or liquidity constrained, and in the presence of the zero lower bound on nominal interest rates, does one find multipliers above unity, in the neighborhood of 1.7.

Review

The empirical evidence is plagued by persistent endogeneity and omitted-variable bias, which Hall frankly acknowledges. Identification is extraordinarily difficult in macroeconomics; as a practical matter it is impossible to untangle all of the interrelated shocks the economy experiences each year.

On the theory side, Scott Sumner would consider this entire exercise a waste of time: the Fed steers the nominal economy and acts to offset nominal shocks; government spending shocks are nominal shocks, so the Fed will act to ensure that the government expenditures multiplier is zero, plus or minus some errors in the timing of fiscal and monetary policy.

Is this a good description of the world? On average over the postwar period, a $1 exogenous change in government spending has led to a $0.50 increase in output; excluding the WWII and Korean War data drives this number down significantly. As a first-order approximation, the fiscal multiplier is likely zero on average. But we don’t care about the average; we care about the marginal multiplier at the zero bound. In that scenario, multipliers are on average higher but still below unity. A crucial open question is to what degree the monetary authority “loses control” of nominal aggregates at the zero lower bound, and to what degree fiscal policy is affected if the monetary authority is “helpless”. (If we are in a situation where the Fed cannot move nominal aggregates, why wouldn’t Congress be similarly constrained?)

Hall’s paper does not explicitly discuss monetary policy. However, adding a monetary authority to his models would only reduce the already-low multipliers that Hall uncovers. His point, that one cannot plausibly obtain multipliers in excess of unity in a modern macro model, is already well-established even without explicitly accounting for the central bank.

Guest blog: The Integral Reviews: Paper 2 – Ball (1999)

By “Integral”

Reviewed: Laurence Ball (1999), “Efficient Rules for Monetary Policy.” International Finance 2(1): pp. 63–83

also featuring

Henrik Jensen (2002), “Targeting Nominal Income Growth or Inflation?” The American Economic Review 92(4): pp. 928–956.

Glenn Rudebusch (2002), “Assessing nominal income rules for monetary policy with model and data uncertainty.” The Economic Journal 112(479): pp. 402–432.

Introduction

Larry Ball’s 1999 paper makes two claims that are relevant for Market Monetarists. The first is uninteresting; the second is interesting.

1. NGDP targeting is actively destabilizing.
2. NGDP targeting is inferior to inflation targeting in a wide range of contexts.

The monetary economics blogosphere has analyzed the first claim to exhaustion. For a review see Adam P’s first post on the paper, replies by Scott Sumner and Bill Woolsey, Adam’s rejoinders (1,2), Adam again, and a contribution from Nick Rowe.

The result of that exchange was identical to the result of the academic response to Ball’s paper: the first claim is generally false and holds only under restrictive assumptions, while the second claim is more robust and typically went unaddressed in the responses. For detailed responses to the stability claim see McCallum (1997) and Dennis (2001).

I’m going to take a stab at the second claim. Let’s start with Ball’s model.

Model

Ball sets up a simple two-equation model which nonetheless contains the essential features of the larger-scale models usually employed for policy analysis. The first equation is an IS curve that relates output to its own lag and the lagged interest rate. The second is a Phillips curve that relates inflation to its own lag and lagged output. Mathematically we have:

y(t) = c*y(t-1) – b*r(t-1) + e(t)
p(t) = p(t-1) + a*y(t-1) + n(t)

where p is inflation, y is (log) output and r is the interest rate, all measured relative to their steady-state values.

The model contains two important features: a unit root in inflation and a lag structure in which the central bank can affect output one year out but inflation only two years out. This model is trivially simple: there is no explicit accounting for private-sector expectations and there is only a single transmission mechanism of monetary policy, from the interest rate to output to inflation. The model is closed with an interest-rate rule chosen by the central bank to hit some objective.

The unit root is key for Ball’s first claim; the lag structure is key for his second. In a model where the interest rate affects output with a lag, and inflation with a further lag through the Phillips curve, targeting nominal GDP causes the economy to cycle: the central bank hits the NGDP target every period, but only by generating undesirable oscillations in output and inflation. However, if one changes the Phillips curve to eliminate the lag in output, nominal GDP targeting becomes an extremely attractive alternative to inflation targeting. This is difficult to prove in closed form, so I will appeal to two recent simulation-based papers.
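To see the mechanism, here is a minimal simulation sketch of Ball’s two-equation model under one way of operationalizing strict NGDP-growth targeting: each period the central bank sets the interest rate so that expected nominal GDP growth next period is zero. The parameter values are illustrative, merely in the neighbourhood of Ball’s calibration rather than his exact numbers.

```python
# Sketch: Ball's backward-looking model under strict NGDP-growth targeting.
# The rule sets r(t) so that expected NGDP growth next period, p(t+1) + y(t+1) - y(t), is zero.
import numpy as np

a, b, c = 0.4, 1.0, 0.8      # illustrative parameters, roughly in the range Ball discusses
T = 60
p = np.zeros(T)              # inflation (deviation from target)
y = np.zeros(T)              # output gap
y[0] = -1.0                  # start from a 1% negative output gap

for t in range(T - 1):
    # NGDP-growth targeting rule, derived from the model's one-period-ahead expectations:
    # E[p(t+1)] = p(t) + a*y(t),  E[y(t+1)] = c*y(t) - b*r(t)
    r = (p[t] + (a + c - 1.0) * y[t]) / b
    p[t + 1] = p[t] + a * y[t]          # Phillips curve (shocks omitted for clarity)
    y[t + 1] = c * y[t] - b * r         # IS curve (shocks omitted for clarity)

# Under this rule the (p, y) system has complex eigenvalues on the unit circle, so the
# response oscillates without dying out -- the essence of Ball's instability result.
for t in range(0, T, 6):
    print(f"t={t:2d}  inflation={p[t]:+.2f}  output gap={y[t]:+.2f}")
```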

Assessment

Rudebusch (2002) tests the efficacy of two distinct NGDP targeting rules against a Taylor rule. All three policy rules are evaluated against a social loss function that weights the variances of output, inflation and the nominal interest rate. Rudebusch’s model is identical to Ball’s except that it adds a role for private-sector expectations. His simulation results mirror Ball’s theoretical result: for reasonable weights on the forward-looking and backward-looking elements of the Phillips curve, NGDP targeting severely underperforms relative to the Taylor rule.

A second simulation is provided by Jensen (2002), whose model is identical to Rudebusch’s save for the lag structure: in Jensen’s model output, inflation and the interest rate are determined simultaneously. He tests five different central bank rules, each calibrated to be optimal within its own class: the fully optimal pre-commitment rule, a policy of pure discretion, inflation targeting, nominal income targeting, and a “combination” regime targeting a weighted average of NGDP and inflation. He finds that NGDP targeting outperforms inflation targeting in nine parameter specifications covering many economically “interesting” cases. In the simulation where supply shocks dominate, a case of much concern to Market Monetarists, NGDP targeting strongly outperforms inflation targeting and indeed comes close to mimicking the results of the fully optimal rule.

So what is left of Ball’s claim? Rudebusch shows that NGDP targeting performs poorly in a model with lags in the Phillips curve. However, it is equally true that NGDP targeting outperforms inflation targeting in a model without lags in the Phillips curve. The exercise yields two main results. First, the desirability of NGDP targeting is sensitive to the lag structure of the model, and the relevance of that lag structure remains an empirical question; this undermines NGDP targeting’s appeal as a rule that is robust to model structure. Second, the desirability of NGDP targeting is robust within the class of IS-PC models that employ a properly microfounded Phillips curve.

References

Ball, Laurence. 1999. “Efficient Rules for Monetary Policy.” International Finance 2(1): pp. 63–83.

Dennis, Richard. 2001. “Inflation expectations and the stability properties of nominal GDP targeting.” The Economic Journal 111(468): pp. 103–113.

Jensen, Henrik. 2002. “Targeting Nominal Income Growth or Inflation?” The American Economic Review 92(4): pp. 928–956.

McCallum, Bennett T. 1997. “The Alleged Instability of Nominal Income Targeting.” NBER Working Paper No. 6291.

Rudebusch, Glenn D. 2002. “Assessing nominal income rules for monetary policy with model and data uncertainty.” The Economic Journal 112(479): pp. 402–432.

Two technical notes

1. Ball and Rudebusch measure society’s loss by a weighted sum of the variances of output, inflation and the interest rate. Jensen, by contrast, uses a loss function based on the sum of weighted squared deviations of output and inflation from their steady-state values. Cursory inspection of Jensen’s tables shows that if one reformulates his societal loss in terms of variances, IT and NGDPT deliver nearly equivalent outcomes; even so, NGDPT still weakly outperforms IT in most specifications.

2. The NGDPT and IT regimes in Jensen are themselves “mixed” regimes which put some weight on the output gap. Given that inflation targeting in practice always gives some weight to the output gap, the inclusion of such a term in both rules is innocuous.

——

See Integral’s earlier guest post: “The Integral Reviews: Paper 1 – Koenig (2011)”

The Integral Reviews: Paper 1 – Koenig (2011)

I am always open to guest blogs and I am therefore very happy that “Integral” has accepted my invitation to do a number of reviews of papers that are relevant to the discussion of monetary theory and the development of Market Monetarism.

“Integral” is a regular commentator on the Market Monetarist blogs. Integral is a pseudonym and I am familiar with his identity.

We start our series with Integral’s review of Evan Koenig’s paper “Monetary Policy, Financial Stability, and the Distribution of Risk”. I recently also wrote a short (too short) comment on the paper, so I am happy to see Integral elaborating on it, as I believe it is a very important contribution to the discussion about NGDP level targeting. Marcus Nunes has also commented on the paper earlier.

Lars Christensen

The Integral Reviews: Paper 1 – Koenig (2011)
By “Integral”

Reviewed: Evan F. Koenig, “Monetary Policy, Financial Stability, and the Distribution of Risk.” FRB Dallas Working Paper No. 1111

Consider the typical debt-deflation storyline. An adverse shock pushes the price level down (relative to its expected trend) and increases consumers’ real debt load. This leads to defaults, liquidation and a general disruption of credit markets. The story is often used as a justification for the central bank to target inflation or the price level, so as to mitigate the effect of such shocks on financial markets.

Koenig puts a twist on this view that will feel quite at home to Market Monetarists: he notes that since nominal debts are paid out of nominal income, any adverse shock to nominal income will lead to financial disruption, not just shocks to the price level. One conclusion he draws is that the central bank can target nominal income to insulate the economy against debt-deflation spirals.

He also makes a theoretical point that will resonate well with Lars’ discussion of David Eagle’s work. Recall that Eagle views NGDP targeting as the optimal way to prevent the “monetary veil” from damaging the underlying “real” economy, which he views as an Arrow-Debreu type general equilibrium economy. Koenig makes a similar observation with respect to financial risk (debt-deflation) and in particular the distribution of risk.

In a world with complete, perfect capital markets, agents sign Arrow-Debreu state-contingent contracts to fully insure themselves against future risks (think shocks). Money is a veil in the sense that fluctuations in the price level, and monetary policy more generally, have no effect on the distribution of risk. However, the real world is far from complete in this regard, and it is difficult to imagine that one could perfectly insure against future income, price or nominal income uncertainty. Koenig thus dispenses with complete Arrow-Debreu contracts and introduces a single debt instrument, a nominal bond. This is where the central bank comes in.

Koenig considers two policy regimes: one in which the central bank commits to a pre-announced price-level target and one in which it commits to a pre-announced nominal-income target. While the price-level target neutralizes uncertainty about the future price level, it provides no insulation against fluctuations in future output. Koenig shows that a price-level target therefore has adverse distributional consequences, harming debtors and helping creditors. Note that this is exactly the outcome that a price-level target is supposed to avoid. By contrast, a central bank policy of targeting NGDP fully insulates the economy against the combination of price and income fluctuations. Not only does it avoid adverse distributional consequences, it delivers a consumption pattern across debtors and creditors identical to the one obtained when capital markets are complete.
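A stylized numerical example (my own illustration, not taken from Koenig’s paper) makes the intuition concrete. Suppose a debtor earns half of real output and owes a fixed nominal debt. Under a price-level target the output shock hits real incomes while the real debt burden is unchanged, so the debtor’s consumption share falls; under an NGDP target the price level rises when output falls, the real debt burden shrinks in proportion, and consumption shares stay constant, mimicking the complete-markets allocation.

```python
# Illustrative arithmetic only: how price-level vs. NGDP targeting shares an output shock
# between a debtor and a creditor holding a fixed nominal debt. Numbers are made up.
D = 10.0                      # nominal debt owed by the debtor
share = 0.5                   # debtor's share of real output

def consumption_shares(Y, P):
    """Real consumption shares after repaying/receiving the nominal debt D at price level P."""
    debtor = share * Y - D / P
    creditor = (1 - share) * Y + D / P
    return debtor / Y, creditor / Y

for Y in (100.0, 90.0):       # normal output vs. a 10% recession
    # Price-level target: P is fixed at 1 regardless of output.
    d_pl, c_pl = consumption_shares(Y, P=1.0)
    # NGDP target: P*Y is fixed at 100, so P rises when Y falls.
    d_ngdp, c_ngdp = consumption_shares(Y, P=100.0 / Y)
    print(f"Y={Y:5.1f} | price-level target: debtor {d_pl:.3f}, creditor {c_pl:.3f} "
          f"| NGDP target: debtor {d_ngdp:.3f}, creditor {c_ngdp:.3f}")
```

Running the example shows the debtor’s consumption share falling from 0.40 to roughly 0.39 in the recession under the price-level target, while it stays at exactly 0.40 under the NGDP target, which is the risk-sharing point Koenig formalizes.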

At an empirical level, Koenig documents that loan delinquency is more closely related to surprise changes in NGDP than in P, providing corroborating evidence that it is nominal income, not the price level, which matters for thinking about the sustainability of the nominal debt load.

Koenig’s conclusion is succinct:

“If there are complete markets in contingent claims, so that agents can insure themselves against fluctuations in aggregate output and the price level, then “money is a veil” as far as the allocation of risk is concerned: It doesn’t matter whether the monetary authority allows random variation in the price level or nominal value of output. If such insurance is not available, monetary policy will affect the allocation of risk. When debt obligations are fixed in nominal terms, a price-level target eliminates one source of risk (price-level shocks), but shifts the other risk (real output shocks) disproportionately onto debtors. A more balanced risk allocation is achieved by allowing the price level to move opposite to real output. An example is presented in which the risk allocation achieved by a nominal-income target reproduces exactly the allocation observed with complete capital markets. Empirically, measures of financial stress are much more strongly related to nominal-GDP surprises than to inflation surprises. These theoretical and empirical results call into question the debt-deflation argument for a price-level or inflation target. More generally, they point to the danger of evaluating alternative monetary policy rules using representative-agent models that have no meaningful role for debt.”
