The Fed was targeting the Labor Market Conditions Index 30 years before it was invented

Today the Federal Reserve published its much-hyped new Labor Market Conditions Index (LMCI).

This is how the Fed describes the Index:

The U.S. labor market is large and multifaceted. Often-cited indicators, such as the unemployment rate or payroll employment, measure a particular dimension of labor market activity, and it is not uncommon for different indicators to send conflicting signals about labor market conditions. Accordingly, analysts typically look at many indicators when attempting to gauge labor market improvement. However, it is often difficult to know how to weigh signals from various indicators. Statistical models can be useful to such efforts because they provide a way to summarize information from several indicators…

…A factor model is a statistical tool intended to extract a small number of unobserved factors that summarize the comovement among a larger set of correlated time series.

In our model, these factors are assumed to summarize overall labor market conditions. What we call the LMCI is the primary source of common variation among 19 labor market indicators. One essential feature of our factor model is that its inference about labor market conditions places greater weight on indicators whose movements are highly correlated with each other. And, when indicators provide disparate signals, the model’s assessment of overall labor market conditions reflects primarily those indicators that are in broad agreement.

The included indicators are a large but certainly not exhaustive set of the available data on the labor market, covering the broad categories of unemployment and underemployment, employment, workweeks, wages, vacancies, hiring, layoffs, quits, and surveys of consumers and businesses.

So is there really anything new in all this? Well, not really. To me it is just another indicator of the US business cycle. The graph below illustrates this.

[Graph: cumulative LMCI versus the year-on-year change in the real Fed funds rate]

The graph shows the relationship between the cumulative Labor Market Conditions Index on the one hand and the year-on-year change in the real Fed funds rate on the other. I have deflated the Fed funds rate with the core PCE deflator.

The picture is pretty clear – since the mid-1980s the Fed has tended to increase real rates when “Labor Market Conditions” have improved and to cut rates when labor market conditions have worsened. There is really nothing new in this – it is just another version of the Taylor rule or the Mankiw rule, which capture the Fed’s Lean-Against-the-Wind regime during the Great Moderation. I am hence sure that you could estimate a nice rule for the Fed funds rate for the period 1985-2007 based on the LMCI and core PCE inflation. I might return to that in a later post…
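For the curious, below is a minimal sketch of how such a rule could be estimated. It assumes you already have the monthly series in a CSV file; the file name and column names are hypothetical placeholders, not a reference to any particular data source.

```python
# Minimal sketch: estimate a simple policy rule for the Fed funds rate
# from the cumulative LMCI and core PCE inflation, 1985-2007.
# The file name and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fed_data.csv", parse_dates=["date"], index_col="date")
df["lmci_cum"] = df["lmci"].cumsum()      # cumulate the monthly LMCI changes
sample = df.loc["1985":"2007"]            # Great Moderation sample

X = sm.add_constant(sample[["lmci_cum", "core_pce_yoy"]])
fit = sm.OLS(sample["fed_funds"], X).fit()
print(fit.summary())                      # the implied "LMCI rule" coefficients
```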

That said, there seems to have been a “structural” break in the relationship around 2001-02. Prior to that the relationship was quite close, but since then it has become somewhat weaker.

Anyway, that is not really important. My point is just that the LMCI is not really telling us much new. That said, the LMCI might help the Fed communicate better about its policy rule (I hope), but why bother when an NGDP level target would be so much better than trying to target real variables (fancy or not)?

PS Don’t be fooled by the graph into concluding that the Fed funds rate should be hiked. To conclude that, you would at the very least need an estimated relationship between the LMCI and the Fed funds rate.

Mankiw rule tells the Fed to tighten

The most famous monetary policy rule is undoubtedly the so-called Taylor rule, which basically tells monetary policymakers to set the key policy interest rate as a function of, on the one hand, the inflation rate relative to the inflation target and, on the other hand, the output gap.

The Taylor rule is rather simple and, at least historically, seems to have been a pretty good indicator of the actual policy followed, particularly by the Federal Reserve. The Taylor rule is often taken to be the “optimal” monetary policy rule. That of course is not necessarily the case. Rather, one should see the Taylor rule as an empirical representation of actual historical Fed policy.
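For reference, Taylor’s original 1993 formulation (with an assumed 2 percent equilibrium real rate and a 2 percent inflation target) can be written in the same plain style as the Mankiw rule below:

Federal funds rate = 2 + Inflation + 0.5 (Inflation – 2) + 0.5 (Output gap)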

A similar rule, which has gotten much less attention than the Taylor rule but which is essentially the same thing, is the so-called Mankiw rule. Greg Mankiw originally spelled out his rule in a paper on US monetary policy in the 1990s.

The beauty of the Mankiw rule is that it is extremely simple: it simply says that the Fed sets the Fed funds rate as a function of the difference between core inflation (PCE) and the US unemployment rate (this of course is also the Fed’s “dual mandate”). Here is the original rule from Mankiw’s paper:

Federal funds rate = 8.5 + 1.4 (Core inflation – Unemployment)

The graph below shows the original Mankiw rule versus actual Fed policy.

[Graph: the original Mankiw rule versus the actual Fed funds rate]

I have also added a Mankiw rule estimated for the period 2000-2007: Federal funds rate = 9.9 + 2.1 (Core inflation – Unemployment)

We see that the Mankiw rule more or less precisely captures the actual movements up and down in the Fed funds rate from 2000 to 2008. Then in 2008 we of course hit the Zero Lower Bound. From the autumn of 2008 the Mankiw rule told us that interest rates should have been cut to somewhere between -4% (the original rule) and -8% (the re-estimated rule). This is of course what has essentially justified quantitative easing.
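To make the two versions concrete, here is a small sketch that computes both rules. The inflation and unemployment inputs are purely illustrative numbers of roughly the late-2009 order of magnitude, not actual data.

```python
def mankiw_rule(core_inflation, unemployment, intercept=8.5, slope=1.4):
    """Mankiw-style rule: the Fed funds rate as a function of core PCE
    inflation minus the unemployment rate (both in percent)."""
    return intercept + slope * (core_inflation - unemployment)

# Original rule from Mankiw's paper vs. the 2000-2007 re-estimated version.
# Illustrative inputs: core inflation 1.5%, unemployment 10%.
original = mankiw_rule(1.5, 10.0)                                # ~ -3.4
reestimated = mankiw_rule(1.5, 10.0, intercept=9.9, slope=2.1)   # ~ -8.0
print(original, reestimated)   # both deeply negative, hence the case for QE
```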

Mankiw rule is telling the Fed to hike rates 

Since early 2011 the Mankiw rule – both versions – has been saying that interest rates should become gradually less negative (mostly because the US unemployment rate has been declining), and maybe most interestingly both the original and the re-estimated Mankiw rule are now saying that the Fed should hike interest rates. In fact the re-estimated rule has just within the past couple of months turned positive for the first time since 2008, and this is really why I am writing this post.

Maybe we can use the Mankiw rule to understand why the Fed now seems to be moving in a more hawkish direction – we will know more about that later this week at the much anticipated FOMC meeting.

BUT the Mankiw rule is not an optimal rule

I have to admit I like the Mankiw rule for its extreme simplicity and because it is useful in understanding historical Fed policy actions. However, I certainly do not think of the Mankiw rule as an optimal monetary policy rule. Rather, my regular readers will of course know that I would prefer that the Fed targeted the nominal GDP level (something, by the way, Greg Mankiw also used to advocate) and that it used the monetary base rather than the Fed funds rate as its primary monetary policy instrument, but that is another story. The purpose here is simply to use the Mankiw rule to understand why the Fed – rightly or wrongly – might move in a more hawkish direction soon.

PS One could argue that the Mankiw rule needs to be adjusted for changes in the natural rate of unemployment, for discouraged-worker effects and for the apparent downward “drift” in US core inflation since 2008. Those are all valid arguments, but again the purpose here is not to say what is “optimal” – just to use the simple Mankiw rule to maybe understand why the Fed is moving closer to rate hikes.

PPS One could also think of the Mankiw rule as a simplistic description of the Evans rule, which the Fed basically announced in September 2012.

Taylor, rules and central bank independence – When Taylor is right and wrong

John Taylor is out with a new paper on “The Effectiveness of Central Bank Independence Versus Policy Rules”.

Here is the abstract:

“This paper assesses the relative effectiveness of central bank independence versus policy rules for the policy instruments in bringing about good economic performance. It examines historical changes in (1) macroeconomic performance, (2) the adherence to rules-based monetary policy, and (3) the degree of central bank independence. Macroeconomic performance is defined in terms of both price stability and output stability. Factors other than monetary policy rules are examined. Both de jure and de facto central bank independence at the Fed are considered. The main finding is that changes in macroeconomic performance during the past half century were closely associated with changes in the adherence to rules-based monetary policy and in the degree of de facto monetary independence at the Fed. But changes in economic performance were not associated with changes in de jure central bank independence. Formal central bank independence alone has not generated good monetary policy outcomes. A rules-based framework is essential.”

So far I have only run through it very quickly, but it looks very interesting, and I certainly do agree with the main conclusion that the important thing is a rules-based monetary framework rather than central bank independence. Taylor obviously prefers his own Taylor rule – Market Monetarists, including myself, prefer NGDP level targeting. Nonetheless, getting central banks to follow a rules-based monetary policy must be the key objective for monetary policy pundits. That is the view of John Taylor and Market Monetarists alike.

Here is another very interesting paper – “The Influence of the Taylor rule on US monetary policy” – certainly related to Taylor and the Taylor rule.

Here is the abstract:

“We analyze the influence of the Taylor rule on US monetary policy by estimating the policy preferences of the Fed within a DSGE framework. The policy preferences are represented by a standard loss function, extended with a term that represents the degree of reluctance to letting the interest rate deviate from the Taylor rule. The empirical support for the presence of a Taylor rule term in the policy preferences is strong and robust to alternative specifications of the loss function. Analyzing the Fed’s monetary policy in the period 2001-2006, we find no support for a decreased weight on the Taylor rule, contrary to what has been argued in the literature. The large deviations from the Taylor rule in this period are due to large, negative demand-side shocks, and represent optimal deviations for a given weight on the Taylor rule.”

John Taylor has long argued that the present crisis was a result of the Federal Reserve diverging from the Taylor rule in the years just prior to 2008 and that this caused a boom-bust in the US economy. The aforementioned paper by Pelin Ilbas, Øistein Røisland and Tommy Sveen indicates that John Taylor is wrong on that view.

So concluding, John Taylor is right that we need a rule based monetary policy framework, but he is wrong about what rule we need.

HT Jens Pedersen

PS I still find Taylor’s focus on interest rates as a monetary policy instrument both frustrating and very wrong. That might be the biggest problem with the Taylor rule – that central bankers have been led to think that “the” interest rate is the only instrument at their disposal.

 

NGDP targeting is not a Keynesian business cycle policy

I have come to realize that many people, when they hear about NGDP targeting, think that it is in some way a counter-cyclical policy – a (feedback) rule to stabilize real GDP (RGDP). This is far from the case – NGDP targeting should instead be seen as a rule to ensure monetary neutrality.

The problem is that most economists and non-economists alike think of the world as a world more or less without money, and their starting point is real GDP. For Market Monetarists the starting point is money and the fact that monetary disequilibrium can lead to swings in real GDP and prices.

The starting point for the traditional Taylor rule is basically a New Keynesian Phillips curve, and the “inputs” in the Taylor rule are inflation and the output gap, where the output gap is measured as RGDP’s deviation from some trend. The Taylor rule thinking is basically the same as old Keynesian thinking in the sense that inflation is seen as a result of excessive growth in RGDP. For Market Monetarists inflation is a monetary phenomenon – if money supply growth outpaces money demand growth, then you get inflation.

Our starting point is not the Phillips curve, but rather Say’s Law and the equation of exchange. In a world without money Say’s Law holds – supply creates its own demand. Said in another way, in a barter economy business cycles do not exist. It therefore follows logically that recessions always and everywhere are a monetary phenomenon.
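The equation of exchange referred to here is the familiar identity, written in the same plain style as the policy rules above:

Money supply × Velocity = Price level × Real GDP, or M × V = P × Y = NGDP

Since P × Y is simply nominal GDP, keeping M × V on a stable path is the same as keeping NGDP on a stable path, which is why the equation of exchange rather than the Phillips curve is the natural starting point for an NGDP level target.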

Monetary policy can therefore “create” a business cycle by creating a monetary disequilibrium; however, in the absence of monetary disequilibrium there is no business cycle.

So while economists often talk of “money neutrality” as a positive concept, Market Monetarists see monetary neutrality not only as a positive concept, but also as a normative concept. Yes, money is neutral in the sense that higher money supply growth cannot increase RGDP in the long run, but higher money supply growth (than money demand growth) will increase inflation and NGDP in the long run.

However, money is not neutral in the short run due to price and wage rigidities, and monetary disequilibrium can therefore create business cycles understood as a general glut – an excess supply of goods and labour. Market Monetarists do not argue that the monetary authorities should stabilize RGDP growth, but rather that the monetary authorities should avoid creating a monetary disequilibrium.

So why so much confusion?

I believe that much of the confusion about our position on monetary policy has to do with the kind of policy advice that Market Monetarists are giving in the present situation in both the US and the euro zone.

Both the euro zone and the US economy are presently in a deep recession with both RGDP and NGDP well below the pre-crisis trend levels. Market Monetarists have argued – in my view forcefully – that the reason for the Great Recession is that monetary authorities in both the US and the euro zone have allowed a passive tightening of monetary policy (see Scott Sumner’s excellent paper on the causes of the Great Recession here) – said in another way, money demand growth has been allowed to strongly outpace money supply growth. We are in a monetary disequilibrium. This is a direct result of monetary policy mistakes, and what we argue is that the monetary authorities should undo these mistakes. Nothing more, nothing less. To undo these mistakes the money supply and/or velocity need to be increased. We argue that that would happen more or less “automatically” (remember the Chuck Norris effect) if the central bank implemented a strict NGDP level target.

So when Market Monetarists like Scott Sumner have called for “monetary stimulus”, it does NOT mean that they want to use some artificial measures to permanently increase RGDP. Market Monetarists do not think that is possible, but we do think that the monetary authorities can avoid creating a monetary disequilibrium through an NGDP level target where swings in velocity are counteracted by changes in the money supply. (See also my earlier post on “monetary stimulus”)

I have previously argued that when an NGDP target is credible, market forces will ensure that any overshoot/undershoot in money supply growth will be counteracted by swings in velocity in the opposite direction. Similarly, one can argue that monetary policy mistakes can create swings in velocity, which is the same as saying that monetary policy mistakes create monetary disequilibrium.
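A stylized numerical sketch of that offsetting mechanism is given below. The numbers are purely illustrative and only meant to show the accounting of the equation of exchange under an NGDP level target.

```python
# Stylized illustration of NGDP level targeting via the equation of exchange:
# NGDP = M * V. If velocity falls (money demand rises), the money supply must
# expand proportionally to keep NGDP on target. Numbers are purely illustrative.
ngdp_target = 100.0
money_supply = 50.0
velocity = 2.0                        # initial NGDP = 50 * 2 = 100, on target

velocity_after_shock = 1.8            # velocity drops 10%
required_money_supply = ngdp_target / velocity_after_shock
print(required_money_supply)          # ~55.6: M expands to offset the fall in V
print(required_money_supply * velocity_after_shock)   # back at the 100 target
```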

Therefore, we are in some sense to blame for the confusion. We should really stop calling for “monetary stimulus” and rather say “stop messing with Say’s Law, stop creating a monetary disequilibrium”. Unfortunately, monetary policy discourse today is not used to these kinds of terms, and many Market Monetarists therefore use fundamentally Keynesian lingo for “convenience”. We should stop that and instead focus on “microsovereignty”.

NGDP level targeting ensures microsovereignty

A good way to structure the discussion about monetary policy or rather monetary policy regimes is to look at the crucial difference between what Larry White has termed a “macroinstrumental” approach and a “microsovereignty” approach.

The Taylor rule is a typical example of the macroinstrumental approach. In this approach it is assumed that the purpose of monetary policy is to “maximise” some utility function for society, which includes a “laundry list” of more or less randomly chosen macroeconomic goals. In the Taylor rule the laundry list includes two items – inflation and the output gap.

The alternative approach to choosing a criterion for monetary success (as Larry White states it) is the microsovereignty approach – micro for microeconomic and sovereignty for individual sovereignty.

The microsovereignty approach states that the monetary regime should ensure an institutional set-up that allows individuals to make decisions on consumption, investment and general allocation without distortions from the monetary system. More technically the monetary system should ensure that individuals can “capture” Pareto improvements.

Therefore an “optimal” monetary regime ensures monetary neutrality. Larry White argues that Free Banking can ensure this, while Market Monetarists argue that, given that central banks exist, an NGDP level targeting regime can ensure monetary neutrality and therefore microsovereignty.

This is basically a traditional neo-classical welfare economic approach to monetary theory. We should choose a monetary regime that “maximises” welfare by ensuring individual sovereignty.

A monetary regime that ensures microsovereignty does not have the purpose of stabilising the business cycle, but that will nonetheless be the likely consequence, as NGDP level targeting removes or at least strongly reduces monetary disequilibrium, and since recessions are a monetary phenomenon this will also strongly reduce RGDP and price volatility. This is, however, a pleasant consequence and not the main objective of NGDP level targeting.

—–

Marcus Nunes has a similar discussion here.

—-

UPDATE: There are two follow-up articles to this post:

“Be right for the right reasons”

“Roth’s Monetary and Fiscal Framework for Economic Stability”