
The last brick – RIP James M. Buchanan

Nobel Prize-winning economist and founding father of the Public Choice school James M. Buchanan has died at age 93. His friends and students have already offered many kind words in his memory. Here I quote two of my friends, professors Steve Horwitz and Peter Kurrild-Klitgaard.

Here is Pete:

James M. Buchanan, RIP. If making a difference is what matters, he was one of the five most influential thinkers of the last 50 years. Sharp as a knife into his 90s and always the scholar.

And Steve:

There is much that one can say about him (Buchanan), not the least of which is that he was still intellectually sharp and active into his 90s. In short: he changed the face of economics and politics and advanced the cause of liberty as much as anyone in the second half of the 20th century…

…No one who wishes to talk responsibly about politics can be ignorant of public choice theory. No one should ever invoke the language of market failure (including externalities) without having digested his work on government failure. And people who run around talking about the constitution better be able to understand something of constitutional political economy.

Beyond all of that, he was a role model of the old school scholar: widely read and properly skeptical of turning economics into an engineering discipline. He was, at bottom, a humanist and a liberal in the oldest and best senses of the terms. And best of all: he was utterly unimpressed by degrees from fancy schools.

Buchanan produced an enormous amount of scholarly work, including numerous books, over his long life. Best known is probably The Calculus of Consent, which he co-authored with Gordon Tullock. However, the works that had the biggest influence on my own thinking were undoubtedly “What Should Economists Do?” and “Cost and Choice”.

Even though Buchanan was primarily a constitutional economist and a Public Choice theorist, he also contributed to monetary thinking. His so-called brick standard was particularly intriguing. Here are Pete Boettke and Daniel Smith on Buchanan and the brick standard:

James Buchanan sought to bring his extensive work on rule-making to bear in envisioning a monetary regime that could operate within a contemporary democratic setting. From the start, Buchanan (1999[1962]) eschewed the ‘presuppositions of Harvey road’ that held that economic policy would be crafted and implemented by a group of benevolent and enlightened elites. Buchanan set out to make the case for a monetary regime using comparative institutional analysis that compared monetary regimes in real, not ideal settings.

Buchanan (1999[1962]) believed that it was not so much the specific type of monetary regime adopted, but the set of rules that defined that regime. Buchanan argued that the brick standard, a labor standard, or a manager confined by well-defined rules, would all put a stop to the government growth let loose by the fiscal profligacy encouraged by the wide scale acceptance of Keynesian ideas in the political realm (see Buchanan and Wagner (2000[1977]). The brick standard, as defined by Buchanan, would be a monetary regime that allowed anyone to go to the mint with a standard building brick of a specified quality and exchange it for the monetary unit, and vice versa. As the general price level fluctuated, market forces would cause automatic adjustments as people would exchange money for bricks when the price level rose above the equilibrium level, and bricks for money when the price level fell below the equilibrium level. Under this regime, market actors, guided by profits and losses would be the mechanism that achieved price predictability, not a government-entity entrusted with the goal of achieving it. In addition, a brick standard would, most likely, divorce domestic monetary policy from international balance of payment and exchange rate policies due to the fact that a brick standard would be unsuitable for those purposes.

For Buchanan (1999[1962], 417), it came down to a toss-up between a brick type standard and a limited manager. What mattered most for monetary predictability was that the rules that set up the monetary regime must be of the ‘constitutional’ variety. In other words, the rules must be set to be ‘relatively absolute absolutes’ in order to protect them from tampering.
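The automatic adjustment Boettke and Smith describe is easy to illustrate. Here is a minimal sketch assuming a crude quantity-theory price level; the par price and all parameters are hypothetical, chosen only to show the convergence, not taken from Buchanan:

```python
# A stylized simulation of the brick standard's self-correcting mechanism.
# Assumption: a crude quantity-theory price level P = K * M; all numbers
# are hypothetical, chosen only to show convergence toward par.

PAR = 100.0   # fixed mint price of one standard brick (monetary units)
K = 0.01      # stylized quantity-theory constant: price level P = K * M
M = 11_000.0  # initial money supply, set so the price level starts above par

for t in range(10):
    P = K * M
    if P > PAR:
        # prices above par: redeem money for bricks at the mint -> M shrinks
        M -= 0.5 * (P - PAR) / K
    elif P < PAR:
        # prices below par: sell bricks to the mint for new money -> M grows
        M += 0.5 * (PAR - P) / K
    print(f"t={t}: price level = {K * M:.2f}")  # converges toward PAR
```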

R.I.P. James M. Buchanan

—-

Update – other economists and scholars on James Buchanan:

Steve Horwitz

Daniel Kuehn

Eamonn Butler

Don Boudreaux (also from Don in 2005 and Don in WSJ)

Mark D. White

Grover Cleveland

Mario Rizzo

David Boaz

Robert Higgs

David Henderson

Alex Tabarrok

Randall Holcombe

Peter Boettke

Ryan Young

Bill Woolsey

Veronique de Rugy

Nick Gillespie

Arnold Kling

Brad DeLong

Christian Bjørnskov (in Danish)

Tyler Cowen (more from Tyler Cowen)

Lenore Ealy

Garett Jones

Charles Rowley

Edward Lopez

The Economist: Free Exchange



Will anybody read this post if I put “data revisions” in the headline?

Opponents of NGDP level targeting often argue that nominal GDP is problematic because national account data are often revised, and hence one would risk targeting the wrong data, which could lead to serious policy mistakes. I generally find this argument flawed, and it is often based on a misunderstanding of what NGDP level targeting is about.

First of all, let me acknowledge that macroeconomic data in general tend to undergo numerous revisions, and often the data quality is very bad. That goes for all macroeconomic data in all countries. Some have, for example, argued that the seasonal adjustment of macroeconomic data has gone badly wrong in many countries after 2008. Furthermore, it is certainly a nontrivial exercise to correct data for different calendar effects – for example whether Easter falls in March or April. Therefore, macroeconomic data are potentially flawed – not only NGDP data. Indeed, in many countries national account numbers – including GDP data – are often revised quite dramatically.

However, what critics fail to realise is that Market Monetarists and other proponents of NGDP level targeting are not arguing for targeting the present or historical level of NGDP, but rather the future NGDP level. Therefore, the real uncertainty is not about data revisions but about the forecasting abilities of central banks. The same is of course the case for inflation targeting – even though it often looks like the ECB is targeting historical or present inflation, the textbook version of inflation targeting clearly states that the central bank should target forecast inflation. In that sense future NGDP is no harder to forecast than future inflation.

I believe, however, that there is pretty strong evidence that central banks in general are pretty bad forecasters, and their forecasts are often biased in one direction or the other. There is therefore good reason to believe that the market is better at predicting nominal variables such as NGDP and inflation than central banks are. Therefore, Market Monetarists – Bill Woolsey and Scott Sumner in particular – have argued that central banks (or governments) should set up futures markets for NGDP, in the same way that the so-called TIPS market in the US provides a market forecast of inflation. Such a market is a real-time “forecaster”, so there would be no revisions, and since the market would be forecasting the future NGDP level, it would also provide an implicit forecast of data revisions – unlike regular macroeconomic forecasts. By using NGDP futures to guide monetary policy, central banks would not have to rely on potentially biased in-house forecasts, and there would be no major problem with data revisions.
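As a sketch of what “guiding policy by the market forecast” could look like in practice – where the target level and the 0.5% tolerance band are my own hypothetical assumptions, not part of any actual proposal:

```python
# A sketch of using an NGDP futures price to guide policy, in the spirit
# of the Sumner/Woolsey proposals. The target level and the tolerance band
# are hypothetical illustrations, not an actual market specification.

ngdp_target_next_year = 16.0e12  # NGDP level target one year ahead (USD)

def policy_signal(futures_implied_ngdp: float) -> str:
    """Compare the market's NGDP forecast (implied by the futures price)
    to the target path and return the direction policy should lean."""
    gap = futures_implied_ngdp / ngdp_target_next_year - 1.0
    if gap < -0.005:
        return f"ease: market expects NGDP {-gap:.1%} below the target path"
    if gap > 0.005:
        return f"tighten: market expects NGDP {gap:.1%} above the target path"
    return "on target: no change needed"

print(policy_signal(15.7e12))  # -> ease: market expects NGDP 1.9% below the target path
```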

Furthermore, pointing out that NGDP data can be revised might identify a potential (!) problem with NGDP targeting, but if one argues that national account data in general are unreliable, then that is also a problem for an inflation targeting central bank. The reason is that most inflation targeting central banks historically have used a so-called Taylor rule (or something similar) to guide monetary policy – to decide whether interest rates should be raised or lowered.

We can write a simple Taylor rule in the following way:

R = a(p – pT) + b(y – y*)

Where R is the key policy interest rate, a and b are coefficients, p is actual inflation, pT is the inflation target, y is real GDP and y* is potential GDP.
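As a quick illustration, here is the rule exactly as written above in a few lines of Python. The coefficients and inputs are purely illustrative (and note that Taylor’s original rule also adds the inflation level and a neutral real rate, which the simple version here omits):

```python
# Direct implementation of the simple Taylor rule in the text:
# R = a*(p - pT) + b*(y - y*). All numbers below are illustrative.

def taylor_rule(p, p_target, y, y_potential, a=0.5, b=0.5):
    """Policy rate response to the inflation gap and the output gap
    (output gap measured in percent of potential GDP)."""
    output_gap = 100.0 * (y - y_potential) / y_potential
    return a * (p - p_target) + b * output_gap

# Inflation 1 point above target, output 2% below potential:
print(taylor_rule(p=3.0, p_target=2.0, y=98.0, y_potential=100.0))
# -> -0.5 (the rule prescribes cutting rates despite above-target inflation)
```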

Hence, it is clear that a Taylor rule-based inflation targeting regime also relies on national account data – not NGDP, but RGDP. And even more importantly, the Taylor rule depends on an estimate of potential real GDP.

Anybody who has ever seriously worked with estimating potential GDP will readily acknowledge how hard it is to estimate. There are numerous methods – for example production functions or HP filters – and the different methods lead to quite different results. So here we have both the problem of data revisions AND the problem of estimating potential GDP from data that might be revised.
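To see how much the method matters, here is a small sketch using the Hodrick-Prescott filter from statsmodels on made-up data. The two smoothing parameters stand in for “different methods”; since the data are synthetic, the only point is how much the estimated output gap moves when the detrending assumption changes:

```python
# Sensitivity of the estimated output gap to the detrending method,
# illustrated with the HP filter on synthetic quarterly log real GDP.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
# synthetic quarterly log real GDP: ~2% annual trend plus noise
log_gdp = 0.005 * np.arange(80) + 0.01 * rng.standard_normal(80)
log_gdp[60:] -= 0.03  # a crisis-style level shift in the last five years

for lamb in (1600, 400_000):  # standard quarterly value vs. a much smoother trend
    cycle, trend = hpfilter(log_gdp, lamb=lamb)
    print(f"lambda={lamb}: latest output gap = {100 * cycle[-1]:.2f}%")
```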

This is particularly important right now, as many economists have argued that potential GDP has dropped in both the US and the euro zone on the back of the crisis. If that is in fact the case, then for a given inflation target monetary policy will have to be tighter than if there had been no drop in potential GDP. Whether or not that is the case is impossible to know – we might know in 5 or 10 years, but right now it is impossible to say whether euro zone trend growth is 1.2% or 2.2%. Who knows? That is a massive challenge for inflation targeting central bankers.
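A back-of-the-envelope calculation shows why the question matters so much: compounded over five years, the two trend-growth assumptions imply potential GDP levels roughly five percent apart – and hence output gap estimates five percentage points apart:

```python
# How much the 1.2% vs. 2.2% trend-growth question compounds over time.
low, high, years = 1.012, 1.022, 5
print(f"{100 * (high**years / low**years - 1):.1f}% difference "
      f"in estimated potential GDP after {years} years")
# -> 5.0% difference in estimated potential GDP after 5 years
```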

Contrary to this, changes in potential GDP – or, for that matter, short-term supply shocks (for example higher oil prices) – will have no impact on the conduct of monetary policy under NGDP targeting, as the NGDP targeting central bank does not concern itself with the split between real GDP growth and inflation.

An example of the problems of how we measure inflation is the ECB’s two catastrophic interest rate hikes in 2011. The ECB twice hiked interest rates and, in my view, caused a massive escalation of the euro crisis. What the ECB reacted to was a fairly steep increase in headline consumer prices. However, in hindsight (and for some of us also in real time) it is clear that there was no real increase in inflationary pressures in the euro zone. The increase in headline consumer price inflation was caused by supply shocks and higher indirect taxes, which is evident from comparing the GDP deflator (which showed no signs of escalating inflationary pressures) with consumer price inflation. Again, there would have been no mixing up of demand and supply shocks if the ECB had targeted the NGDP level instead. From that it was very clear that monetary conditions were very tight in 2011 and got even tighter as the ECB moved to hike interest rates. Had the ECB focused on the NGDP level, it would obviously have realised that what was needed was monetary easing, not monetary tightening, and had it acted on that, the euro crisis would likely already have been over.
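The logic can be summed up in a crude decision sketch. The numbers below are hypothetical, loosely in the spirit of the euro zone in 2011, and the thresholds are my own simplification:

```python
# A rough shock-classification sketch based on the paragraph above:
# compare CPI inflation with GDP-deflator inflation and the NGDP level
# gap (all numbers hypothetical, loosely in the spirit of 2011).

cpi_inflation = 2.8       # headline inflation, boosted by oil and indirect taxes
deflator_inflation = 1.0  # GDP deflator: little domestic price pressure
ngdp_gap = -5.0           # NGDP percent below its pre-crisis trend path

if cpi_inflation > deflator_inflation and ngdp_gap < 0:
    print("supply shock / tax effects with weak demand -> ease, don't hike")
elif ngdp_gap > 0:
    print("demand-driven overheating -> tighten")
else:
    print("no clear signal")
```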

It should also be noted that even though NGDP numbers tend to be revised, that does not mean that the quality of the numbers as such is worse than that of inflation data. In fact, inflation data are often of a very dubious character. An example is the changes to the measurement of consumer prices in the US after the so-called Boskin report came out in 1996. The report concluded that US inflation data overestimated inflation by more than 1% – and therefore equally underestimated real GDP growth. Try plugging that into the Taylor rule above: p is lower and y* is higher – both lead to the conclusion that interest rates should be lowered. Some have claimed that the revisions to the measurement of consumer prices in the US caused the Federal Reserve to pursue an overly easy monetary stance at the end of the 1990s, which caused the dot-com bubble. I have some sympathy for this view, and at least I know that had the Fed been following a strict NGDP level targeting regime at the end of the 1990s, it would have tightened monetary policy faster and more aggressively than it did, particularly in 1999-2000, as it would have disregarded the split between prices and real GDP and instead focused on the escalation of NGDP growth.
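Plugging illustrative numbers into the simple Taylor rule above makes the point concrete. The coefficients of 0.5 and the one-point change in the measured output gap are my own assumptions for the sake of the example, not estimates from the Boskin report:

```python
# The Boskin correction fed through the simple Taylor rule above.
a, b = 0.5, 0.5
d_inflation = -1.0   # measured inflation revised down by ~1 point (Boskin)
d_output_gap = -1.0  # assumed: a higher measured y* lowers the gap by 1 point
print(f"change in prescribed rate: {a * d_inflation + b * d_output_gap:+.2f} points")
# -> change in prescribed rate: -1.00 points
```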

Concluding: yes, national account numbers – including NGDP numbers – are often revised, and that creates some challenges for NGDP targeting. However, the important point is that present and historical data are not what matters, but rather the expectation of future NGDP, which an NGDP futures market (or a bookmaker, for that matter) could provide a good forecast of (including possible data revisions). Inflation targeting central banks, by contrast, face both the challenge of data revisions and the particular challenges of separating demand shocks from supply shocks and of estimating potential GDP.

Therefore, any critique of NGDP targeting based on the “data revision” argument is equally valid – or even more so – in the case of inflation targeting. Hence, worries about data quality are not an argument against NGDP targeting, but rather an argument for scrapping inflation targeting – the ECB with its unfortunate actions proved that in both 2008 and 2011.
