Revisiting the P-star model

I read Milton Friedman’s book “Free to Choose” at the age of 16, and ever since then I have been more or less obsessed with monetary theory and particularly the equation of exchange:

M•V = P•Y
My view of the world has obviously developed over the 32 years since I read “Free to Choose”, but I am still fully convinced that monetary policy failure has historically been the main cause of macroeconomic problems – whether inflation or recessions and depressions. In fact I am more convinced than ever.

When I started studying economics at the University of Copenhagen in the early 1990s my obsession with monetary matters continued. That more or less coincided with the publication of a paper which had quite an impact on how I think about empirical monetary analysis.

The paper “M2 per unit of potential GNP as an anchor for the price level” was written by Jeffrey J. Hallman, Richard D. Porter and David H. Small and first published in 1989.

In the paper the authors introduced the concept of P-star as a measure of where the price level would be in the “long run” (when monetary velocity and GDP were at their long-term equilibrium levels). An updated version of the paper was published in 1991.

Based on the equation of exchange this price level – P-star or P* – is calculated as:

P* = M•V*/Y*

Where M is the present level of some monetary aggregate (in Hallman et al.’s paper M2 for the US), V* is the long-term trend level of money-velocity and Y* is potential GDP.

Hallman et al. argued that the actual price level, P, over time should converge towards P*.

Consequently, the gap between P and P* should be a useful indicator of future inflation. Hence, if P*&gt;P we should expect inflation to accelerate, and if P*&lt;P inflation should decelerate.
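The arithmetic is simple enough to sketch in a few lines of Python (the numbers below are purely hypothetical, not from the paper):

```python
# P* and the p-gap from the equation of exchange
# (toy numbers, not the paper's data).

def p_star(m: float, v_star: float, y_star: float) -> float:
    # P* = M * V* / Y*: the long-run equilibrium price level
    return m * v_star / y_star

def p_gap(p: float, ps: float) -> float:
    # Percentage gap between P* and P; positive implies
    # inflationary pressure under the P-star logic
    return 100.0 * (ps - p) / p

ps = p_star(3000.0, 1.8, 5000.0)  # ≈ 1.08
gap = p_gap(1.05, ps)             # ≈ 2.9%, so inflation should accelerate
```

With an actual price level of 1.05 and P* at roughly 1.08, the positive gap signals that prices still have to "catch up" with the money stock.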

This made a lot of sense in the late 1980s, when we were in a situation where the Federal Reserve and other central banks had not formulated their nominal targets in any clear fashion and where monetary policy was still partly conducted through money base control.

However, starting in the early 1990s more and more central banks introduced inflation targeting and operationally became exclusively focused on conducting monetary policy through interest rate control. It (gradually!) became clear to me that the P-star concept might no longer be useful, as the price level would be anchored by the inflation target alone; causality in the model would be turned around and velocity would hence become a function of the inflation target.

Therefore, I more or less gave up on the P-star model and only occasionally revisited it when analysing different Emerging and ‘Frontier’ markets. But for some reason my previous post on the McCallum rule made me think it could be fun to have a look at the P-star model using the ‘outside base’ (the money base minus excess reserves) as the measure of M in the model.

So this is what I am going to do in this blog post.

Calculating P-star based on the outside base

To calculate P-star we need an estimate of V* and Y*.

Y* is simply potential GDP, and here I use the CBO’s estimate of potential GDP, which I get from the St. Louis Fed’s FRED database.

In terms of V*, I calculated V based on actual nominal GDP and the outside base (V = NGDP/outside base).

I have then de-trended that series. I could have used an HP-filter or something similar, but instead I simply estimated a trend based on linear, quadratic and inverse trends of V.
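A minimal sketch of that de-trending step, assuming it means a single OLS fit combining linear, quadratic and inverse terms (the exact specification is my assumption, and the velocity series below is made up for illustration):

```python
import numpy as np

# Fit V* as an OLS trend of the form V = a + b*t + c*t^2 + d/t
# and take the fitted values as trend velocity V*.
# (The combined-terms specification is an assumption, not
# necessarily the exact one used in the post.)
def velocity_trend(v: np.ndarray) -> np.ndarray:
    t = np.arange(1, len(v) + 1, dtype=float)
    X = np.column_stack([np.ones_like(t), t, t ** 2, 1.0 / t])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return X @ beta  # fitted values = V*

# Made-up velocity observations for illustration
v = np.array([1.50, 1.60, 1.65, 1.70, 1.72, 1.75, 1.80, 1.82])
v_star = velocity_trend(v)
```

An HP-filter would give a smoother, time-varying trend; the parametric fit above has the advantage of being trivially extendable out of sample.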

[Figure: Velocity trend]

So now I have both V* and Y* and using the outside base as a measure of M I can calculate P*.

The graph below shows my measure of US P* since 1984 and the actual price level P (the GDP deflator).


The orange line, P, is the actual price level, while the blue line is P* (P-star). The gap (%) between the two is shown as the green bars (p-gap).

One can think of the p-gap as a measure of excess liquidity in the economy: when the p-gap is positive (negative), monetary conditions are ‘easy’ (‘tight’) and one should expect nominal demand and inflation to pick up (slow down).

It is notable that the p-gap turned negative ahead of the US recessions of ’90-’91, 2000-1 and 2008-9 and in that sense has been a reliable leading indicator of recessions in the US for more than three decades.

But is it also a reliable indicator of inflation?

The simple answer is YES.

To test this I ran a simple OLS regression:

dp(t) = a + b•pE(t-1) + c•p-gap(t-4)

Where dp(t) is the quarterly change in the price level, pE(t-1) is consumers’ inflation expectations (University of Michigan survey) in the previous quarter and p-gap(t-4) is the price gap 4 quarters earlier. a, b and c are coefficients.
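The regression can be sketched with synthetic data (everything below is invented for illustration; the real exercise would use the actual GDP deflator, Michigan survey and p-gap series):

```python
import numpy as np

# Synthetic illustration of the inflation regression
#   dp(t) = a + b*pE(t-1) + c*p-gap(t-4)
# All series and coefficients here are made up.
rng = np.random.default_rng(0)
n = 120  # quarters of synthetic data

pE = rng.normal(3.0, 0.5, n)     # "survey" inflation expectations
pgap = rng.normal(0.0, 2.0, n)   # "price gap" in percent
noise = rng.normal(0.0, 0.1, n - 4)

# Generate dp(t) for t = 4..n-1 from an assumed true relationship
dp = 0.2 + 0.3 * pE[3:-1] + 0.15 * pgap[:-4] + noise

# OLS: regress dp on a constant, lagged expectations and lagged p-gap
X = np.column_stack([np.ones(n - 4), pE[3:-1], pgap[:-4]])
a, b, c = np.linalg.lstsq(X, dp, rcond=None)[0]
# b and c should come out close to the assumed 0.3 and 0.15
```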

And here is the model output.

[Figure: P-star model output]

This isn’t rocket science and will certainly not win me the Nobel Prize, but it is good enough for a blog post. What we see is that the p-gap is statistically significant and hence can be used to predict changes in inflation.

This makes me think that P-star could be a useful indicator of inflation going forward, and maybe it deserves a bit more attention than it has been getting – at least for the past 20 years.

In fact I am tempted to say that P-star is no worse (or better) an indicator of the inflationary outlook than it was back in 1989 when it was first suggested.

I will try to do a bit more work on the P-star model going forward, perhaps modelling P-star with other money supply measures and perhaps for the euro zone as well.


The McCallum rule is back – and so am I

It has been some time since I posted anything on The Market Monetarist – primarily because I have been doing other things – among other things running my consultancy Markets &amp; Money Advisory (which I still do) and, for a year, being editor-in-chief of the Danish financial website Euroinvestor (which I no longer am).

However, I missed blogging and I have particularly missed having an outlet for my (casual?) thinking on monetary matters.

Consequently, I have reluctantly decided that I want to start blogging a bit again on The Market Monetarist.

How much I will be blogging in the future is unclear, as I also have to make a living doing other things – continuing my consultancy work (on international economics, markets and money), academic work, as well as a lot of speaking engagements.

If you are interested in getting in contact with me regarding these topics then feel free to drop me a mail.

But now back to the monetary blogging.

The Fed has de facto targeted 4% NGDP growth since 2010

Officially the Federal Reserve targets 2% inflation (measured as core PCE inflation). However, looking at the numbers it is clear that the Fed has consistently failed to deliver on this target.

Instead it actually seems like the Fed – consciously or not – has followed a nominal GDP level targeting rule, as long favoured by market monetarists like Scott Sumner and David Beckworth and of course myself.

Or rather the Fed has done so after the 2008-9 crisis hit – exactly because the Fed failed to maintain the de facto NGDP level targeting regime of the pre-2008 Great Moderation period.

Roughly speaking the Fed was de facto targeting 5% NGDP growth from 2000 until 2007-8 and 4% NGDP growth since 2010 as the graph below illustrates.


It is rather remarkable just how close actual US nominal GDP has been to a 4% NGDP path since the beginning of 2010.

I personally, therefore, also think that the Fed should finally acknowledge this and replace its inflation target with a 4% NGDP level target – from the present level of GDP.

But what about the instrument?

In the past 25 years or so it has become the norm to think about the conduct of monetary policy in terms of the so-called Taylor rule, first proposed by John Taylor back in 1993.

Just to refresh the reader’s memory – this is the Taylor rule:

r = p + .5y + .5(p-2) + 2

Where r is the monetary policy interest rate “set” by the central bank, p is the rate of inflation and y is the output gap. The first “2” refers to the inflation target (assumed by Taylor to be 2%) and the second “2” refers to the natural interest rate (also assumed by Taylor to be 2%).
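As a quick sketch, the rule is easy to evaluate in code:

```python
# The Taylor (1993) rule as stated above, with the inflation target
# and the natural rate both defaulting to Taylor's assumed 2%.
def taylor_rate(inflation: float, output_gap: float,
                target: float = 2.0, natural: float = 2.0) -> float:
    return inflation + 0.5 * output_gap + 0.5 * (inflation - target) + natural

# At 2% inflation and a closed output gap the rule prescribes 4%.
r = taylor_rate(2.0, 0.0)  # -> 4.0
```

Note how lowering the assumed natural rate mechanically lowers the prescribed policy rate across the board, which is exactly the point made below.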

The Taylor rule was meant to be a simple representation of how the Fed actually conducted monetary policy, but it has later been seen by many central banks – including the Fed – as a rule for how central banks should conduct monetary policy.

There is nothing surprising about this, as the Taylor rule has essentially been seen by central banks as a way to implement the inflation targets that central banks around the world introduced from the early 1990s.

However, a lot of things have happened since the 1990s.

First of all, the natural interest rate in the US likely is not 2% – it is much lower.

Second, the Fed’s 2% inflation target is not being hit consistently and more and more monetary commentators are questioning whether an inflation target is a good idea in the first place and whether it should be 2%.

That all has to do with the right-hand side of the equation, but what about the left-hand side? Why is it just assumed that the central bank “sets” the interest rate?

Monetarists, and in recent years market monetarists, have argued that the central bank in fact does not determine interest rates – or at least that central banks cannot maintain an interest rate different from what essentially is the natural rate without either causing a sharp rise in inflation or a recession.

Consequently, monetarists such as Milton Friedman argued that central banks primarily should conduct monetary policy by controlling the growth rate of the money base and leave the determination of interest rates to the market.

This view has of course made somewhat of a comeback over the last decade, as central banks have been forced by events – or rather by the simple fact that the natural interest rate has dropped significantly – to look at alternatives to interest rate “targeting” as the monetary policy instrument.

However, no central bank anywhere has taken the consequence of this and switched from interest rate control to monetary base control – at least not consistently.

That said, when Taylor first introduced his rule there certainly was not a consensus that this was the right way to do things as a central banker.

A competing rule to the Taylor rule was the so-called McCallum rule first suggested by Bennett McCallum in 1987.

The McCallum Rule – time for a comeback

The McCallum rule is hardly taught at universities these days and my guess is that few central bankers know what it is, but it might nonetheless be time for a comeback. In fact, it might already have made one.

But let’s first have a look at the McCallum rule:

b = x* – v* + .5(x* – x(t-1))

Where b is the quarterly growth rate of the money base, v* is the trend quarterly growth rate of money base velocity (a 16-quarter moving average) and x* is the targeted quarterly growth rate of nominal GDP, while x(t-1) is the quarterly growth rate of nominal GDP in the previous quarter.
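A minimal sketch of the rule (the growth rates below are hypothetical, stated in percent per quarter):

```python
# The McCallum rule as stated above; all inputs are quarterly
# growth rates in percent. The example numbers are hypothetical.
def mccallum_base_growth(x_target: float, v_trend: float,
                         x_prev: float) -> float:
    return x_target - v_trend + 0.5 * (x_target - x_prev)

# Example: 1% quarterly NGDP target (~4% annualised), 0.2% trend
# velocity growth, NGDP grew only 0.6% last quarter -> the rule
# calls for roughly 1.0% base growth this quarter.
b = mccallum_base_growth(1.0, 0.2, 0.6)
```

The feedback term 0.5(x* − x(t−1)) is what makes the rule self-correcting: base growth speeds up when NGDP growth undershoots the target and slows down when it overshoots.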

One can see why the McCallum rule should be of interest to present day practitioners and observers of monetary policy.

First of all, due to the zero lower bound on interest rates and the uncertainty regarding the actual level of the natural interest rate, money base control has become necessary even though central banks are very reluctant to acknowledge this.

Second, as I discussed above it looks like the Federal Reserve has been targeting NGDP rather than inflation and it is therefore natural to actually focus on a monetary policy rule where we have nominal GDP – rather than inflation – on the right hand side of the equation.

The question is – how has actual Fed policy compared to a McCallum rule?

I have tested that with a simple simulation of the McCallum rule using the parameters suggested by McCallum, assuming that the Fed had a 5% NGDP growth target from 2000 to 2010 and a 4% NGDP growth target from 2010 until today.

I have, however, made one adjustment. Since 2008 the Fed has paid interest on excess reserves held at the Fed, which has greatly distorted the money base numbers. I have therefore constructed a series for what Jeff Hummel has called the “outside money base” – the money base minus excess reserves.

This is how the actual development in outside money base quarterly growth (4-quarter moving average) compares to the McCallum rule.

[Figure: McCallum rule vs. actual outside money base growth]

It should be noted that this is not an estimated relationship, but rather a simulated one based on the parameters suggested more than 30 years ago. It might obviously be possible to get a better fit, but the point here is that the McCallum rule does a remarkably good job of tracking actual monetary policy in the US both before and after the Great Recession hit in 2008-9.

In fact it is notable that the McCallum rule seems to have been an even better indicator after 2008-9 than before – the difference between actual money base growth and the rule has been smaller after 2008-9 than before.

If we look at the difference between the rule and the actual money base growth we get a simple measure of excessively easy or tight monetary policy and we can use this to evaluate US monetary policy over the past decade.

We can for example see that the Fed was overly eager to “normalize” monetary policy in 2010 and hence caused money base growth to slow too much.

Likewise, during the euro crisis in 2011-13 money base growth was slightly too slow, and finally Janet Yellen’s obsession with the Phillips curve and the labour market caused the Fed to allow money base growth to slow too much.

Contrary to this, Fed chief Jay Powell was right to slow money base growth in 2017-18, but we can also see that in 2019 he has overdone it a bit, and presently money base growth remains slightly too slow (around 4.5% y/y) compared to what the McCallum rule tells us it should be (6-6.5% y/y).

Time for the Fed to be serious about money base control

Concluding, we can see that using the McCallum rule as an indicator of the monetary stance is helpful, and while I do not necessarily think the Fed should introduce a McCallum rule, I nonetheless think the Fed – and Fed watchers – should pay a lot more attention to the McCallum rule and other similar money base rules.

Furthermore, I think it is about time that the Fed acknowledges that low rates are here to stay (it’s structural, stupid!) and consequently thinks about how to start using the (outside) money base as the primary monetary policy instrument rather than continuing to mess around with interest rate targeting.

Finally, the Fed should make its de facto 4% NGDP target official and hence use operational targets for permanent money base growth to hit this target.

This might seem a bit revolutionary, but when the market monetarists started arguing for NGDP targeting a decade ago that also sounded crazy. Now it has been the de facto policy for nearly a decade.

Let me hear what you think of my comeback blog post and remember to follow me on Twitter.

And finally some sad news. One of my great heroes Marvin Goodfriend – long-time economist at the Richmond Fed and Professor at Carnegie Mellon University’s Tepper School of Business – has passed away. Marvin without a doubt was one of the greatest monetary thinkers of his generation and he will be greatly missed. (Remembering Marvin Goodfriend)



