Will anybody read this post if I put “data revisions” in the headline?

Opponents of NGDP level targeting often argue that nominal GDP is problematic as a target because national account data are frequently revised, so the central bank would risk targeting the wrong data, which could lead to serious policy mistakes. I generally find this argument flawed, and I find that it is often based on a misunderstanding of what NGDP level targeting is about.

First of all, let me acknowledge that macroeconomic data in general undergo numerous revisions and that data quality is often quite poor. That goes for all macroeconomic data in all countries. Some have, for example, argued that the seasonal adjustment of macroeconomic data has gone badly wrong in many countries after 2008. Furthermore, it is certainly not a trivial exercise to correct data for calendar effects – for example whether Easter falls in March or April. Therefore, macroeconomic data are potentially flawed – not only NGDP data. That said, in many countries national account numbers – including GDP data – are often revised quite dramatically.

However, what critics fail to realise is that Market Monetarists and other proponents of NGDP level targeting are not arguing for targeting the present or historical level of NGDP, but rather the future NGDP level. Therefore, the real uncertainty is not data revisions but the forecasting abilities of central banks. The same is of course the case for inflation targeting – even though it often looks as if the ECB is targeting historical or present inflation, the textbook version of inflation targeting clearly states that the central bank should target forecast inflation. In that sense future NGDP is no harder to forecast than future inflation.

I believe, however, that there is pretty strong evidence that central banks in general are poor forecasters and that their forecasts are often biased in one direction or the other. There is therefore good reason to believe that the market is better at predicting nominal variables such as NGDP and inflation than central banks are. Therefore, Market Monetarists – Bill Woolsey and Scott Sumner in particular – have argued that central banks (or governments) should set up futures markets for NGDP, in the same way that the so-called TIPS market in the US provides a market forecast for inflation. Such a market is a real-time “forecaster”, so there would be no revisions, and as the market would be forecasting the future NGDP level it would also provide an implicit forecast of data revisions – unlike regular macroeconomic forecasts. By using NGDP futures to guide monetary policy, central banks would not have to rely on potentially biased in-house forecasts, and there would be no major problem with data revisions.

Furthermore, arguing that NGDP data can be revised might point to a potential (!) problem with NGDP targeting, but if one argues that national account data in general are unreliable, then that is also a problem for an inflation targeting central bank. The reason is that most inflation targeting central banks have historically used a so-called Taylor rule (or something similar) to guide monetary policy – to see whether interest rates should be raised or lowered.

We can write a simple Taylor rule in the following way:

R = a(p - pT) + b(y - y*)

Where R is the key policy interest rate, a and b are coefficients, p is actual inflation, pT is the inflation target, y is real GDP and y* is potential GDP.
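To make the rule concrete, here is a minimal sketch in Python of the simplified rule above. The coefficients a = b = 0.5 and all data values are my illustrative assumptions, not numbers from any actual central bank:

```python
# Simplified Taylor rule from the text: R = a*(p - pT) + b*(y - y*)
# Coefficients and inputs are illustrative assumptions.

def taylor_rule(p, p_target, y, y_star, a=0.5, b=0.5):
    """Prescribed policy-rate response in percentage points."""
    return a * (p - p_target) + b * (y - y_star)

# Inflation exactly at target, but two different estimates of potential
# GDP (y and y_star are level indices, so y - y_star is the output gap
# in percent):
print(taylor_rule(p=2.0, p_target=2.0, y=100.0, y_star=100.0))  # 0.0
print(taylor_rule(p=2.0, p_target=2.0, y=100.0, y_star=98.0))   # 1.0
```

Note that with identical observed data, revising the estimate of y* by two percentage points moves the prescribed rate by a full percentage point – which is exactly why the rule's dependence on potential GDP matters.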

Hence, it is clear that a Taylor rule based inflation target also relies on national account data – not NGDP, but RGDP. And even more importantly, the Taylor rule depends on an estimate of potential real GDP.

Anybody who has ever seriously worked on estimating potential GDP will readily acknowledge how hard it is. There are numerous estimation methods – for example production functions or HP filters – and the different methods lead to quite different results. So here we have both the problem of data revisions AND the problem of estimating potential GDP from data that might be revised.

This is particularly important right now, as many economists have argued that potential GDP has dropped in both the US and the euro zone on the back of the crisis. If that is in fact the case, then for a given inflation target monetary policy will have to be tighter than if there had been no drop in potential GDP. Whether or not that is the case is impossible to know – we might know it in 5 or 10 years, but right now it is impossible to say whether euro zone trend growth is 1.2% or 2.2%. Who knows? That is a massive challenge for inflation targeting central bankers.
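A quick back-of-the-envelope calculation (my illustrative numbers) shows why the 1.2% vs 2.2% question matters so much: compounded over just five years, the two trend-growth assumptions put the estimated level of potential GDP more than five percent apart – and hence the output-gap estimates fed into a Taylor rule more than five percentage points apart.

```python
# Level of potential GDP after `years` of compound trend growth,
# starting from an index of 100 (trend rates taken from the text).

def potential_path(level0, trend, years):
    return level0 * (1 + trend) ** years

low = potential_path(100.0, 0.012, 5)   # 1.2% trend growth
high = potential_path(100.0, 0.022, 5)  # 2.2% trend growth
print(round(high - low, 1))  # about 5.3 index points after only 5 years
```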

In contrast, changes in potential GDP – or for that matter short-term supply shocks (for example higher oil prices) – will have no impact on the conduct of monetary policy under NGDP targeting, as the NGDP targeting central bank does not concern itself with the split between real GDP growth and inflation.

An example of the problems of how we measure inflation is the ECB's two catastrophic interest rate hikes in 2011. The ECB twice hiked interest rates and in my view caused a massive escalation of the euro crisis. What the ECB reacted to was a fairly steep increase in headline consumer prices. However, in hindsight (and for some of us also in real time) it is (was) pretty clear that there was no real increase in inflationary pressures in the euro zone. The increase in headline consumer price inflation was caused by supply shocks and higher indirect taxes, which is evident from comparing the GDP deflator (which showed no signs of escalating inflationary pressures) with consumer price inflation. Again, there would have been no mixing up of demand and supply shocks if the ECB had targeted the NGDP level instead. From the NGDP level it was very clear that monetary conditions were very tight in 2011 and got even tighter as the ECB moved to hike interest rates. Had the ECB focused on the NGDP level, it would obviously have realised that what was needed was monetary easing and not monetary tightening – and had the ECB acted on that, the euro crisis would likely already be over.

It should also be noted that even though NGDP numbers tend to be revised, that does not mean that the quality of the numbers as such is worse than that of inflation data. In fact, inflation data are often of a very dubious character. An example is the changes in the measurement of consumer prices in the US after the so-called Boskin report came out in 1996. The report concluded that US inflation data overestimated inflation by more than 1% – and therefore equally underestimated real GDP growth. Try to plug that into the Taylor rule above. It means that p is lower and y* is higher – and both would lead to the conclusion that interest rates should be lowered. Some have claimed that the resulting revisions to the measurement of consumer prices in the US caused the Federal Reserve to pursue an overly easy monetary stance at the end of the 1990s, which caused the dot-com bubble. I have some sympathy for this view, and at least I know that had the Fed been following a strict NGDP level targeting regime at the end of the 1990s, it would have tightened monetary policy faster and more aggressively than it did, particularly in 1999-2000, as the Fed would have disregarded the split between prices and real GDP and instead focused on the escalation of NGDP growth.
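To see the Boskin point in numbers, we can plug an illustrative 1.1 percentage point measurement bias into the simplified Taylor rule from above. All figures here are my own assumptions for the sketch, not estimates from the report:

```python
# The post's simplified Taylor rule: R = a*(p - pT) + b*(y - y*)

def taylor_rule(p, p_target, y, y_star, a=0.5, b=0.5):
    return a * (p - p_target) + b * (y - y_star)

BIAS = 1.1  # assumed Boskin-style CPI overstatement, percentage points

# Before the correction: measured inflation 3%, target 2%, output gap zero
r_before = taylor_rule(3.0, 2.0, y=100.0, y_star=100.0)
# After: true inflation is BIAS lower, and the upward revision of
# potential output raises y*, turning the output gap negative
r_after = taylor_rule(3.0 - BIAS, 2.0, y=100.0, y_star=100.0 + BIAS)
print(round(r_before, 2))  # 0.5
print(round(r_after, 2))   # -0.6: both terms now argue for lower rates
```

With a = b = 0.5 the correction lowers the prescribed rate by the full 1.1 percentage points, which is the mechanism behind the claim that the measurement revisions made Fed policy easier.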

To conclude: yes, national account numbers – including NGDP numbers – are often revised, and that creates some challenges for NGDP targeting. However, the important point is that present and historical data are not what matters, but rather the expectation of future NGDP, which an NGDP futures market (or a bookmaker for that matter) could provide a good forecast of (including likely data revisions). Inflation targeting central banks, on the other hand, face the same challenge of data revisions, plus the challenges of separating demand shocks from supply shocks and of estimating potential GDP.
Therefore, any critique of NGDP targeting based on the “data revision” argument is equally valid – or even more so – in the case of inflation targeting. Hence, worries about data quality are not an argument against NGDP targeting, but rather an argument for scrapping inflation targeting – the ECB with its unfortunate actions proved that in both 2008 and 2011.


5 Comments

  1. The measurement error side just doesn’t get what NGDPLT is about. They don’t grasp that the goal is to stabilize ‘confidence’, not to actually control the NGDP level with perfect information. Still, it is one of the better critiques of NGDPLT that I’ve seen. I looked into the issue for the U.S. last year, and found that the revisions weren’t a huge factor http://economicsophisms.com/2011/10/23/are-gdp-revisions-a-big-deal/.

    Still, I think it is high time that governments start buying transaction data from the major payments companies (Visa, Mastercard) on a weekly basis. I suspect that would give us a very good driver for consumption equations, and other data could be used to fill in investment and exports.

  2. John Hall – January 9, 2013

    I agree with a decent amount of what you said, but I do think that data revisions lead to some practical problems when implementing an NGDP targeting system that are often glossed over. U.S. GDP for any given quarter is revised like five times (advance, preliminary, final, annual, five-year). Consider a hypothetical situation where I hold an NGDP futures contract for 1 year out and there is a rebasing that changes the very first period of NGDP back in like 1947, but the percent change in every other period is the same. Should holders of NGDP futures contracts lose or make money because of such a situation?

