Let the Fed target a Quasi-Real PCE Price Index (QRPCE)

The Federal Reserve on Wednesday said it would adopt a long-run inflation target of 2%. Some of my blogging Market Monetarist friends are not too happy about this – see Scott Sumner and Marcus Nunes. But I have an idea that might bring the Fed very close to the Market Monetarist position without having to go back on Wednesday's announcement.

We know that the Fed's favourite price index is the deflator for Private Consumption Expenditure (PCE), and the Fed tends to adjust this for supply shocks by referring to "core PCE". Market Monetarists would of course welcome the Fed targeting something it can influence directly rather than reacting to positive and negative supply shocks. This is the idea behind NGDP level targeting (as well as George Selgin's Productivity Norm).

Instead of using core PCE I think the Fed should decompose the PCE deflator into demand inflation and supply inflation by using a Quasi-Real Price Index. I have spelled out how to do this in an earlier post.

In my earlier post I show that demand inflation (pd) can be calculated in the following way:

(1) pd = n - yp

Where n is nominal GDP growth and yp is trend growth in real GDP.

Private Consumption Expenditure growth and NGDP growth are highly correlated over time, and the amplitudes of PCE and NGDP growth are nearly identical. Therefore, we can easily calculate pd from PCE:

(2) pd = pce - yp

Where pce is the growth rate of PCE. An advantage of using PCE rather than NGDP is that the PCE numbers are released monthly, whereas NGDP is only available quarterly.
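For readers who want to see the arithmetic, here is a minimal sketch of equation (2) in Python – the PCE growth numbers are made up, and the 3% trend RGDP growth is an assumption, not an estimate:

```python
# Demand inflation from equation (2): pd = pce - yp.
yp = 3.0  # assumed trend real GDP growth, in percent

# hypothetical year-on-year PCE growth rates, in percent
pce_growth = [5.2, 4.8, 1.0, -2.5]

demand_inflation = [round(pce - yp, 1) for pce in pce_growth]
print(demand_inflation)  # -> [2.2, 1.8, -2.0, -5.5]
```

Negative values are demand deflation – exactly the readings a central bank targeting demand inflation would react to.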

Of course the Fed is talking about the "long run". To Market Monetarists that would mean that the Fed should target the level rather than the growth rate of the index. Hence, we really want to go back to a price index.

If we write (2) in levels rather than in growth rates we basically get the following:

(3) QRPCE = PCE/RGDP*

Where QRPCE is what we could term a Quasi-Real PCE Price Index, PCE is the nominal level of Private Consumption Expenditure and RGDP* is the long-term trend in real GDP. Below I show a graph of QRPCE assuming 3% trend RGDP growth in the long run. The scale is the natural logarithm.
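A minimal sketch of how one could construct the index from equation (3) – the PCE levels are purely hypothetical and the 3% RGDP trend is an assumption:

```python
import math

# Equation (3): QRPCE = PCE / RGDP*, with RGDP* a 3% trend.
# All figures are hypothetical annual levels, indexed to 100 at the start.
g = 0.03  # assumed long-run trend RGDP growth

pce = [100.0, 105.0, 110.3, 112.0]                 # nominal PCE levels
rgdp_trend = [100.0 * (1.0 + g) ** t for t in range(len(pce))]

qrpce = [p / y for p, y in zip(pce, rgdp_trend)]
log_qrpce = [math.log(q) for q in qrpce]           # natural-log scale, as in the graph

print([round(q, 4) for q in qrpce])
```

The last observation shows the point of the index: nominal PCE can keep rising while QRPCE falls, because QRPCE only registers spending growth in excess of trend RGDP.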

I have compared the QRPCE with a 2% trend starting in 2000. The starting point is rather arbitrary, but it nonetheless shows that Fed policy kept QRPCE around a 2% growth path in the first half of the decade, and that from 2004-5 monetary policy became too easy to stay on this path. From 2008, however, QRPCE dropped sharply below the 2% growth path and is presently around 9% below the "target".

So if the Fed really wants to use a price index based on Private Consumption Expenditure, it should use a Quasi-Real Price Index rather than a "core" measure. It should of course also state that a long-run inflation target of 2% is symmetrical, meaning that it will target the level of the price index rather than its year-on-year growth rate. This would effectively mean that the Fed would be targeting an NGDP growth path of around 5%, but it would be packaged as price level targeting that ensures 2% inflation in the long run. Maybe Fed chairman Bernanke could be convinced that QRPCE is actually the index to look at rather than core PCE? Packaging actually does matter in politics – and maybe that is also the case for monetary policy.

It's time to get rid of the "representative agent" in monetary theory

“Tis vain to talk of adding quantities which after the addition will continue to be as distinct as they were before; one man’s happiness will never be another man’s happiness: a gain to one man is no gain to another: you might as well pretend to add 20 apples to 20 pears.”

Jeremy Bentham, 1789

I have often felt that modern-day Austrian economists are fighting yesterday's battles. They often seem to think that mainstream economists think as if they were the "market socialists" of the 1920s and that the "socialist calculation debate" is still ongoing. I feel like screaming "wake up, people! We won. No economist endorses central planning anymore!"

However, I am wrong. The Austrians are right. Many economists today – knowingly or out of ignorance – still endorse some of the worst failures of early-day welfare theory. Economists have known since the time of Jeremy Bentham that one man's happiness cannot be compared to another man's happiness. Interpersonal utility comparison is a fundamental no-no in welfare theory. We cannot and shall not compare one person's utility with another's. But this is exactly what "modern" monetary theorists do all the time.

Take any New Keynesian model of the style made famous by theorists like Michael Woodford. In these models the central bank is assumed to be independent (and benevolent). The central banker sets interest rates to minimize the "loss function" of a "representative agent". Based on this kind of rationalisation, economists like Woodford find theoretical justification for Taylor-rule-style monetary policy functions.

Nobody seems to find this problematic, and it is often argued that Woodford has even provided the microeconomic foundation for these loss functions. Pardon my French, but that is bullsh*t. Woodford assumes that there is a representative agent. What is that? Imagine if we introduced this character in other areas of economic research – most economists would find it highly problematic.

There is no such thing as a representative agent. Let me illustrate. The economy is hit by a negative shock to nominal GDP. With Woodford's representative agent, all agents in the economy are hit in the same way and the loss (or gain) is the same for every agent. No surprise – all agents are assumed to be identical. As a result there is no conflict between the objectives of different agents (there is basically only one agent).

But what if there are two agents in the economy – one borrower and one saver? The borrower is borrowing from the saver at a fixed nominal interest rate. If nominal GDP drops, then that will effectively be a transfer of wealth from the borrower to the saver.
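The arithmetic of this wealth transfer is simple. Here is a small sketch with purely hypothetical numbers, using the price level implied by P = NGDP/RGDP:

```python
# Two agents: a borrower owing a fixed nominal payment X to a saver.
# With P = NGDP / RGDP, an unexpected NGDP drop (RGDP unchanged) lowers the
# price level and raises the real value of the payment - wealth is in effect
# transferred from the borrower to the saver. Numbers are purely illustrative.
X = 10.0    # fixed nominal loan payment
Y = 100.0   # real GDP, unchanged by the shock

real_payment = {}
for N in (200.0, 180.0):        # expected NGDP vs. a 10% NGDP shortfall
    P = N / Y                   # price level from the equation of exchange
    real_payment[N] = X / P     # real burden of the fixed nominal payment
    print(f"NGDP={N:.0f}  P={P:.2f}  real payment={real_payment[N]:.2f}")
```

The borrower's real debt burden rises by about 11% even though nothing real happened – which is exactly the kind of distributional effect a representative-agent model cannot capture.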

This might of course make the Calvinist ideologue happy, but what would the modern-day welfare theorist say?

The modern welfare theorist would of course apply the Pareto criterion to the situation and argue that only a monetary policy rule that ensures Pareto efficiency is a good monetary policy rule. An allocation is Pareto efficient if there is no other feasible allocation that makes at least one party better off without making anyone worse off. Hence, if a drop in nominal GDP leads to a transfer of wealth from one agent to another, then a monetary policy that allows this does not ensure Pareto efficiency and is therefore not an optimal monetary policy.

David Eagle has shown in a number of papers that only one monetary policy rule can ensure Pareto efficiency, and that is NGDP level targeting (see David's guest posts here, here and here). The other policy rules – inflation targeting, price level targeting and NGDP growth targeting – are all Pareto inefficient. Price level targeting, however, does ensure Pareto efficiency if there are no supply shocks in the economy.

This result is significantly more important than any result of New Keynesian analysis of monetary policy rules with a representative agent. Analysis based on the assumption of a representative agent completely fails to tell us anything about the present economic situation and the appropriate response to the crisis. Just think about whether a model with a "representative country" in the euro zone or one with Greece (the borrower) and Germany (the saver) makes more sense.

It is time to finally acknowledge that Bentham’s words also apply to monetary policy rules and finally get rid of the representative agent.

——

For a much more insightful and clever discussion of this topic see David Eagle’s paper “Pareto Efficiency vs. the Ad Hoc Standard Monetary Objective – An Analysis of Inflation Targeting” from 2005.

Allan Meltzer’s great advice for the Federal Reserve

Here is Allan Meltzer’s great advice on US monetary policy:

“Repeatedly, the message has been to reduce tax rates permanently… A permanent tax cut was supposed to do what previous fiscal efforts had failed to do — generate sustained expansion of the American economy. 

No one should doubt that an expansion is desirable for US… and the rest of the world…The US government has watched the economy stagnate much too long. A policy change is long overdue. 

The problem with the advice (about fiscal easing) is that few would, and none should, believe that the US can reduce tax rates permanently. US has run big budget deficits for the past five years and accumulated a large debt that must be serviced at considerably higher interest rates in the future … And the US must soon start to finance large prospective deficits for old age pensions and health care. There is no way to finance these current and future liabilities that will not involve higher future tax rates… 

It is wrong when somebody tells the American to maintain the value of the dollar…The fluctuating rate system should work both ways. Strong economies appreciate; weak economies depreciate. 

What is the alternative? Deregulation is desirable, but it will do its work slowly. If temporary tax cuts are saved, not spent, and permanent tax cuts are impossible, the US choice is between devaluation and renewed deflation. The deflationary solution runs grave risks. Asset prices would continue to fall. Investors anticipating further asset price declines would have every reason to hold cash and wait for better prices. The fragile banking system would face larger losses as asset prices fell. 

Monetary expansion and devaluation is a much better solution. An announcement by the Federal Reserve and the government that the aim of policy is to prevent deflation and restore growth by providing enough money to raise asset prices would change beliefs and anticipations. Rising asset prices, including land and property prices, would revive markets for these assets once the public became convinced that the policy would be sustained. 

The volume of “bad loans” at US banks is not a fixed sum. Rising asset prices would change some loans from bad to good, thereby improving the position of the banking system. Faster money growth would add to the banks’ ability to make new loans, encouraging business expansion.

This program can work only if the exchange rate is allowed to depreciate. Five years of lowering interest rates has shown that there is no way to maintain the exchange rate and generate monetary expansion…

…Some will see devaluation as an attempt by the US to expand through exporting. This is a half-truth. Devaluation will initially increase US exports and reduce imports. As the economy recovers, incomes will rise. Rising incomes are the surest way of generating imports of raw materials and sub-assemblies from US trading partners.

Let money growth increase until asset prices start to rise.”

I think Allan Meltzer, as a true monetarist, presents a very strong case for US monetary easing and at the same time acknowledges that fiscal policy is irrelevant. Furthermore, Meltzer makes a forceful argument that if monetary policy is eased, financial sector distress would ease significantly as well. The readers of my blog should not be surprised that Allan Meltzer has always been one of my favourite economists.

Meltzer indirectly hints that he wants the Federal Reserve to target asset prices. I am not sure how good an idea that is. After all, which asset prices are we talking about? Stock prices? Bond prices? Or property prices? It would be much better to target the NGDP level – but OK, stock prices do indeed tend to forecast the future NGDP level pretty well.

OK, I admit it… I have been cheating! Allan Meltzer did indeed write this (or most of it), but he was not writing about the US. He was writing about Japan in 1999 (so I changed the text a little). It would be very interesting to hear why Dr. Meltzer thinks monetary easing is wrong for the US today but was right for Japan in 1999. Why would Allan Meltzer be against an NGDP target rule that would bring the US NGDP level back to the pre-crisis trend and thereafter target a 3%, 4% or 5% growth path, as suggested by US Market Monetarists such as Scott Sumner, Bill Woolsey and David Beckworth?


There is no such thing as fiscal policy – and that goes for Japan as well

Scott Sumner has a comment on Japan's "lost decades" and the importance of fiscal policy in Japan. Based on comments from Paul Krugman and Tim Duy, Scott acknowledges that Japan has in fact not had two lost decades. Scott also discusses whether fiscal policy has been helpful in reviving growth in Japan over the past decade.

I have written a number of comments on Japan (see here, here and here).

I have two main conclusions in these comments:

1)   Japan only had one "lost decade" – not two. The 1990s obviously were a disaster, but over the past decade Japan has grown in line with other large developed economies when real GDP growth is adjusted for population growth. (And yes, 2008 was a disaster in Japan as well.)

2)   Monetary policy is at the centre of these developments. Once the Bank of Japan introduced Quantitative Easing, Japan pulled out of the slump (until the BoJ gave up QE again in 2007 and allowed Japan to slip back into deflation). See especially my post "Japan shows QE works".

This graph of GDP/capita in the G7 proves the first point.

Second, my method of decomposing demand and supply inflation – the so-called Quasi-Real Price Index – shows that once the Bank of Japan introduced QE in 2001, Japanese demand deflation eased, and from 2004 to 2007 deflation in Japan reflected only supply deflation, while demand inflation was slightly positive or zero. This coincided with the revival of Japanese growth. The graph below illustrates this.
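For the curious, the decomposition itself is just two subtractions. A sketch with made-up figures (not actual Japanese data), assuming a 1% trend RGDP growth for Japan:

```python
# Quasi-real decomposition: demand inflation pd = n - yp, and the residual
# supply inflation ps = p - pd. yp is an assumed trend RGDP growth; the
# (year, NGDP growth n, headline inflation p) triples are illustrative only.
yp = 1.0  # assumed Japanese trend real GDP growth, in percent

data = [(2001, -1.8, -1.2), (2004, 1.0, -0.3), (2006, 1.1, 0.2)]

decomposition = {}
for year, n, p in data:
    pd = n - yp               # demand inflation (negative = demand deflation)
    ps = p - pd               # supply inflation (negative = supply deflation)
    decomposition[year] = (round(pd, 1), round(ps, 1))
    print(year, decomposition[year])
```

In the illustrative 2004-style row, demand inflation is zero and all the measured deflation is supply-side – the pattern described above for the QE years.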

Obviously the Bank of Japan's policies during the past decades have been far from optimal, but the experience clearly shows that monetary policy is very powerful: even the BoJ's meagre QE program was enough to at least bring growth back to the Japanese economy.

Furthermore, it is clear that Japan's extremely weak fiscal position can to a large extent be explained by the fact that the BoJ has de facto been targeting 0% NGDP growth rather than, for example, 3% or 5% NGDP growth. I basically don't think there is a problem with a 0% NGDP growth path target if you start out with a totally unleveraged economy – and one can hardly say Japan did. The problem is that the BoJ changed its de facto NGDP target during the 1990s. As a result, public debt ratios exploded. This is similar to what we see in Europe today.

So yes, it is obvious that Japan can't afford "fiscal stimulus" – as is the case today for the euro zone countries. But that discussion is in my view totally irrelevant! As I recently argued, there is no such thing as fiscal policy in the sense Keynesians claim. Only monetary policy can impact nominal spending, and I strongly believe that fiscal policy has had very little impact on the Japanese growth pattern over the last two decades.

Above I have basically added nothing new to the discussion about Japan's lost decade (not decades!) and fiscal and monetary policy in Japan. But since Scott brought up the issue, I thought it was an opportunity to remind my readers (including Scott) that the Japanese story is pretty simple – and that it is wrong that we keep talking about Japan's lost decades. The Japanese story tells us basically nothing new about fiscal policy (though it reminds us that debt ratios explode when NGDP drops), but the experience shows that monetary policy is terribly important.

——–

PS I feel pretty sure that if the Bank of Japan and the ECB tomorrow announced that they would target an increase in NGDP of 10 or 15% over the coming two years and thereafter would target a 4% NGDP growth path then all talk of “lost decades”, the New Normal and fiscal crisis would disappear very fast. Well, the same would of course be true for the US.

Guest Blog: The Two Fundamental Welfare Principles of Monetary Economics (By David Eagle)

I am extremely happy that David Eagle is continuing his series of guest blogs on my blog.

I strongly believe that David's ideas are truly revolutionary, and anybody who takes monetary policy and monetary theory seriously should study these ideas carefully. In this blog David presents what he has termed the "Two Fundamental Welfare Principles of Monetary Economics" as a clear alternative to the ad hoc loss functions used in most of the New Keynesian monetary literature.

To me, David Eagle here provides the clear microeconomic and welfare-economic foundation for Market Monetarism. David's thinking and ideas have a lot in common with George Selgin's view of monetary theory – particularly in "Less than Zero" (despite their clear methodological differences – David embraces math while George uses verbal logic). Anybody who reads and understands David's and George's research will forever abandon the idea of a "Taylor function" and New Keynesian loss functions.

Enjoy this long, but very, very important blog post.

Lars Christensen

—————–

Guest Blog: The Two Fundamental Welfare Principles of Monetary Economics

by David Eagle

Good Inflation vs. Bad Inflation

At one time, doctors considered all cholesterol bad. Now they talk about good cholesterol and bad cholesterol. Today, most economists consider all inflation uncertainty bad, at least all core inflation uncertainty. However, some economists – including George Selgin (2002), Evan Koenig (2011), Dale Domian, and myself (and probably most of the market monetarists) – believe that while aggregate-demand-caused inflation uncertainty is bad, aggregate-supply-caused inflation or deflation actually improves the efficiency of our economies. Through inflation or deflation, nominal contracts under Nominal GDP (NGDP) targeting naturally provide the appropriate real-GDP risk sharing between borrowers and lenders, between workers and employers, and more generally between the payers and receivers of any prearranged nominal payment. Inflation targeting (IT), price-level targeting (PLT), and conventional inflation indexing actually interfere with the natural risk sharing inherent in nominal contracts.

I am not the first economist to think this way, as George Selgin (2002, p. 42) reports that "Samuel Bailey (1837, pp. 115-18) made much the same point." Also, the wage indexation literature that originated in the 1970s makes the distinction between demand-induced and supply-induced inflation shocks, although that literature did not address the issue of risk sharing.

The Macroeconomic Ad Hoc Loss Function vs. Pareto Efficiency

The predominant view among most macroeconomists and monetary economists is that all inflation uncertainty is bad regardless of the cause. This view is reflected in the ad hoc loss function that forms the central foundation of conventional macroeconomic and monetary theory. This loss function is often expressed as a weighted sum of the variances of (i) deviations of inflation from its target and (ii) the output gap. Macro/monetary economists using this loss function give the impression that their analyses are "scientific" because they often use control theory to minimize this function. Nevertheless, as Sargent and Wallace (1975) noted, the form of this loss function is ad hoc; it is simply assumed by the economist making the analysis.

I do not agree with this loss function, and hence I am at odds with the vast majority of macro/monetary economists. However, I have neoclassical microfoundations on my side – our side when we include Selgin, Bailey, Koenig, Domian, and many of the market monetarists. The ad hoc social loss function used as the basis for much of macroeconomic and monetary theory is basically the negative of an ad hoc social utility function. The microeconomic profession has long viewed the Pareto criterion as vastly superior to, and more "scientific" than, ad hoc social utility functions based on the biased preconceptions of economists. By applying the Pareto criterion instead of a loss function, Dale Domian and I found what I now call "The Two Fundamental Welfare Principles of Monetary Economics." My hope is that these Fundamental Principles will in time supplant the standard ad hoc loss function approach to macro/monetary economics.

These Pareto-theoretic principles support what George Selgin (p. 42) stated in 2002 and what Samuel Bailey (pp. 115-18) stated in 1837.  Some economists have dismissed Selgin’s and Bailey’s arguments as “unscientific.”  No longer can they legitimately do so.  The rigorous application of neoclassical microeconomics and the Pareto criterion give the “scientific” support for Selgin’s and Bailey’s positions.  The standard ad-hoc-loss-function approach in macro- and monetary economics, on the other hand, is based on pulling this ad hoc loss function out of thin air without any “scientific” microfoundations basis.

Macroeconomists and monetary economists have applied the Pareto criterion to models involving representative consumers.  However, representative consumers miss the important ramifications of monetary policy on diverse consumers.  In particular, models of representative consumers miss (i) the well-known distributional effect that borrowers and lenders are affected differently when the price level differs from their expectations, and (ii) the Pareto implications about how different individuals should share in changes in RGDP.

The Two Direct Determinants of the Price Level

Remember the equation of exchange (also called the "quantity equation"), which says that MV=N=PY, where M is the money supply, V is income velocity, N is nominal aggregate spending as measured by nominal GDP, P is the price level, and Y is aggregate supply as measured by real GDP. Focusing on the N=PY part of this equation and solving for P, we get:

(1) P=N/Y

This shows there are two and only two direct determinants of the price level:

(i)             nominal aggregate spending as measured by nominal GDP, and

(ii)           aggregate supply as measured by real GDP.

This also means that these are the two and only two direct determinants of inflation.

The Two Fundamental Welfare Principles of Monetary Economics

When computing partial derivatives in calculus, we treat one variable as constant while we vary the other variable.  Doing just that with respect to the direct determinants of the price level leads us to The Two Fundamental Welfare Principles of Monetary Economics:

Principle #1:    When all individuals are risk averse and RGDP remains the same, Pareto efficiency requires that each individual’s consumption be unaffected by the level of NGDP.

Principle #2:    For an individual with average relative risk aversion, Pareto efficiency requires that individual’s consumption be proportional to RGDP.[1]

Dale Domian and I (2005) proved these two principles for a simple, pure-exchange economy without storage, although we believe the essence of these Principles goes well beyond pure-exchange economies and applies to our actual economies.

My intention in this blog is not to present rigorous mathematical proofs for these principles.  These proofs are in Eagle and Domian (2005).  Instead, this blog presents these principles, discusses the intuition behind the principles, and gives examples applying the principles.

Applying the First Principle to Nominal Loans:

I begin by applying the First Principle to borrowers and lenders; this application will give the sense of the logic behind the First Principle.  Assume the typical nominal loan arrangement where the borrower has previously agreed to pay a nominal loan payment to the lender at some future date.  If NGDP at this future date exceeds its expected value whereas RGDP is as expected, then the price level must exceed its expected level because P=N/Y.  Since the price level exceeds its expected level, the real value of the loan payment will be lower than expected, which will make the borrower better off and the lender worse off.  On the other hand, if NGDP at this future date is less than its expected value when RGDP remains as expected, then the price level will be less than expected, and the real value of the loan payment will be higher than expected, making the borrower worse off and the lender better off.  A priori both the borrower and the lender would be better off without this price-level risk.  Hence, a Pareto improvement can be made by eliminating this price-level risk.

One way to eliminate this price-level risk is for the central bank to target the price level, which if successful will eliminate the price-level risk; however, doing so will interfere with the Second Principle, as we will explain later. A second way to eliminate this price-level risk when RGDP stays the same (which is when the First Principle applies) is for the central bank to target NGDP; as long as both NGDP and RGDP are as expected, the price level will also be as expected, i.e., there is no price-level risk.

Inflation indexing is still another way to eliminate this price-level risk.  However, conventional inflation indexing will also interfere with the Second Principle as we will soon learn.

That borrowers gain (lose) and lenders lose (gain) when the price level exceeds (falls short of) its expected level is well known. However, economists usually refer to this as "inflation risk." Technically, it is not inflation risk; it is price-level risk, which is especially relevant when we compare inflation targeting (IT) with price-level targeting (PLT).

An additional clarification the First Principle makes concerning this price-level risk faced by borrowers and lenders is that the risk only applies as long as RGDP stays the same. When RGDP changes, the Second Principle applies.

Applying the Second Principle to Nominal Loans under IT, PLT, and NT:

The Second Principle is really what differentiates Dale Domian's and my position, and the positions of Bailey, Selgin, and Koenig, from the conventional macroeconomic and monetary views. Nevertheless, the Second Principle is fairly easy to understand. Aggregate consumption equals RGDP in a pure exchange economy without storage, capital, or government. Hence, when RGDP falls by 1%, aggregate consumption must also fall by 1%. If the total population has not changed, then average consumption must fall by 1% as well. If there is a consumer A whose consumption falls by less than 1%, there must be another consumer B whose consumption falls by more than 1%. While that could be Pareto justified if A has more relative risk aversion than B, when both A and B have the same level of relative risk aversion, their Pareto-efficient consumption must fall by the same percentage. In particular, when RGDP falls by 1%, the consumption level of anyone with average relative risk aversion should fall by 1%. (See Eagle and Domian, 2005, and Eagle and Christensen, 2012, for the basis of these last two statements.)

My presentation of the Second Principle focuses on the average consumer, a consumer with average relative risk aversion. My belief is that monetary policy should do what is optimal for consumers with average relative risk aversion rather than have the central bank second-guess how the relative-risk-aversion coefficients of different groups (such as borrowers and lenders) compare to the average.

Let us now apply the Second Principle to borrowers and lenders where we assume that both the borrowers and the lenders have average relative risk aversion. (By the way, "relative risk aversion" is a technical economic term invented by Kenneth Arrow, 1965, and John Pratt, 1964.) Let us also assume that the real net incomes of both the borrower and the lender, other than the loan payment, are proportional to RGDP. Please note that this assumption really must hold on average since RGDP is real income. Hence, average real income = RGDP/m, where m is the number of households, which means average real income is proportional to RGDP by definition (the proportion is 1/m).

The Second Principle says that since both the borrower and lender have average relative risk aversion, Pareto efficiency requires that both of their consumption levels must be proportional to RGDP.  When their other real net incomes are proportional to RGDP, their consumption levels can be proportional to RGDP only if the real value of their nominal loan payment is also proportional to RGDP.

However, assume the central bank successfully targets either inflation or the price level so that the price level at the time of this loan payment is as expected no matter what happens to RGDP.  Then the real value of this loan payment will be constant no matter what happens to RGDP.  That would mean the lenders will be guaranteed this real value of the loan payment no matter what happens to RGDP, and the borrowers will have to pay that constant real value even though their other net real incomes have declined when RGDP declined.  Under successful IT or PLT, borrowers absorb the RGDP risk so that the lenders don’t have to absorb any RGDP risk.  This unbalanced exposure to RGDP risk is Pareto inefficient when both borrowers and lenders have average relative risk aversion as the Second Principle states.

Since IT and PLT violate the Second Principle, we need to search for an alternative targeting regime that will automatically and proportionately adjust the real value of a nominal loan payment when RGDP changes. Remember that the real value of the nominal loan payment is xt=Xt/Pt. Replace Pt with Nt/Yt to get xt=(Xt/Nt)Yt, which means the ratio of xt to Yt equals Xt/Nt. When Xt is a fixed nominal payment, the only way for the ratio Xt/Nt to be constant is for Nt to be known in advance. That will only happen under successful NGDP targeting.

What this has shown is that the proportionality of the real value of the loan payment, which is needed for the Pareto-efficient sharing of RGDP risk for people with average relative risk aversion, happens naturally with nominal fixed-payment loans under successful NGDP targeting.  When RGDP decreases (increases) while NGDP remains as expected by successful NGDP targeting, the price level increases (decreases), which decreases (increases) the real value of the nominal payment by the same percentage by which RGDP decreases (increases).
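The proportionality is easy to verify numerically. A small sketch with purely hypothetical figures:

```python
# Under successful NGDP targeting, N_t is known in advance, so the real value
# of a fixed nominal payment is x_t = (X_t / N_t) * Y_t - proportional to RGDP.
# Numbers are purely illustrative.
X_t = 10.0    # fixed nominal loan payment
N_t = 200.0   # NGDP, pinned down in advance by the target

real_payment = {}
for Y_t in (100.0, 99.0, 101.0):   # RGDP as expected, 1% below, 1% above
    P_t = N_t / Y_t                # the price level adjusts, since P = N / Y
    real_payment[Y_t] = X_t / P_t
    print(f"RGDP={Y_t:.0f}  real payment={real_payment[Y_t]:.3f}")
```

When RGDP falls 1%, the real payment falls exactly 1% – borrower and lender share the RGDP risk in proportion, with no renegotiation and no explicit indexing clause.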

The natural ability of nominal contracts (under successful NGDP targeting) to appropriately distribute the RGDP risk for people with average relative risk aversion pertains not just to nominal loan contracts, but to any prearranged nominal contract, including nominal wage contracts. However, inflation targeting and price-level targeting circumvent the nominal contract's ability to appropriately distribute this RGDP risk by making the real value constant rather than varying proportionately with RGDP.

Inflation Indexing and the Two Principles:

Earlier in this blog I discussed how conventional inflation indexing could eliminate the price-level risk that arises when RGDP remains as expected but NGDP drifts away from its expected value. While that is true, conventional inflation indexing leads to violations of the Second Principle. Consider an inflation-indexed loan where the principal, and hence the payment, is adjusted for changes in the price level. Basically, the payment of an inflation-indexed loan has a constant real value no matter what – no matter the value of NGDP and no matter the value of RGDP. While "no matter the value of NGDP" is good for the First Principle, "no matter the value of RGDP" violates the Second Principle.

What is needed is a type of inflation indexing that complies with both Principles.  That is what Dale Domian’s and my “quasi-real indexing” does.  It adjusts for the aggregate-demand-caused inflation, but not for the aggregate-supply-caused inflation, which is necessary for the Pareto-efficient distribution of RGDP among people with average relative risk aversion.
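To make the contrast concrete, here is a stylized Python sketch (all values hypothetical) comparing conventional indexing with quasi-real indexing for a single payment: QRI scales the payment by actual relative to expected NGDP, so demand-caused inflation is filtered out while supply-caused inflation still passes through to the real payment.

```python
# Hypothetical contract: conventional indexing scales the payment by the
# price level; quasi-real indexing (QRI) scales it by actual/expected NGDP.

X = 1000.0           # originally contracted nominal payment
P_expected = 200.0   # price level expected when the contract was signed
N_expected = 20000.0 # expected NGDP (so expected RGDP is 100.0)

def real_conventional(P):
    """Payment indexed to the price level, then deflated: constant real value."""
    return X * (P / P_expected) / P

def real_qri(N, P):
    """Payment indexed to actual/expected NGDP, then deflated."""
    return X * (N / N_expected) / P

# Supply shock: RGDP falls 5% while NGDP stays on target, so P rises.
Y, N = 95.0, 20000.0
P = N / Y
print(real_conventional(P))  # 5.0  - unchanged, violating the Second Principle
print(real_qri(N, P))        # 4.75 - falls 5%, proportionally sharing the RGDP loss
```

Under a pure demand shock (NGDP off target, RGDP as expected) the QRI payment’s real value is unchanged, which is what the First Principle requires.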

Previous Literature:

Up until now, I have just mentioned Bailey (1837) and Selgin (2002) without quoting them.  Now I will quote them.  Selgin (2002, p. 42) states, “…the absence of unexpected price-level changes” is “a requirement … for avoiding ‘windfall’ transfers of wealth from creditors to debtors or vice-versa.”  This “argument … is perfectly valid so long as aggregate productivity is unchanging. But if productivity is subject to random changes, the argument no longer applies.”  When RGDP increases, causing the price level to fall, “Creditors will automatically enjoy a share of the improvements, while debtors will have no reason to complain: although the real value of the debtors’ obligations does rise, so does their real income.”

Also, Selgin (2002, p. 41) reports that “Samuel Bailey (1837, pp. 115-18) made much the same point.  Suppose … A lends £100 to B for one year, and that prices in the meantime unexpectedly fall 50 per cent. If the fall in prices is due to a decline in spending, A obtains a real advantage, while B suffers an equivalent loss. But if the fall in prices is due to a general improvement in productivity, … the enhanced real value of B’s repayment corresponds with the enhanced ease with which B and other members of the community are able to produce a given amount of real wealth. …Likewise, if the price level were … to rise unexpectedly because of a halving of productivity, ‘both A and B would lose nearly half the efficiency of their incomes’, but ‘this loss would arise from the diminution of productive power, and not from the transfer of any advantage from one to the other’.”

The wage indexation literature founded by Gray and Fischer recognized the difference between unexpected inflation caused by aggregate-demand shocks and by aggregate-supply shocks; the main conclusion of this literature is that when aggregate-supply shocks exist, partial rather than full inflation indexing should take place.  Fischer (1984) concluded that the ideal form of inflation indexing would be a scheme that would filter out the aggregate-demand-caused inflation but leave the aggregate-supply-caused inflation intact.  However, he stated that no such inflation indexing scheme had yet been derived, and that it would probably be too complicated to be of any practical use.  Dale Domian and I published our quasi-real indexing (QRI) in 1995, and QRI is not much more complicated than conventional inflation indexing.  Despite the wage indexation literature leading to these conclusions, the distinction between aggregate-demand-caused inflation and aggregate-supply-caused inflation has not been integrated into mainstream macroeconomic theory.  I hope this blog will help change that.

Conclusions:

As the wage indexation literature has realized, there are two types of inflation:  (i) aggregate-demand-caused inflation and (ii) aggregate-supply-caused inflation.  Aggregate-demand-caused inflation is bad inflation because it unnecessarily imposes price-level risk on the parties of a prearranged nominal contract.  However, aggregate-supply-caused inflation is good in that it is necessary for nominal contracts to naturally spread RGDP risk between the parties of the contract.  Nominal GDP targeting tries to keep the bad aggregate-demand-caused unexpected inflation or deflation to a minimum, while letting the good aggregate-supply-caused inflation or deflation take place so that both parties in the nominal contract proportionately share in RGDP risk.  Inflation targeting (IT), price-level targeting (PLT), and conventional inflation indexing interfere with the natural ability of nominal contracts to Pareto-efficiently distribute RGDP risk.  Quasi-real indexing, on the other hand, gets rid of the bad inflation while keeping the good inflation.

Note that successful price-level targeting and conventional inflation indexing basically have the same effect on the real value of loan payments.   As such, we can look at conventional inflation indexing as insurance against the central bank not meeting its price-level target.

Note that successful NGDP targeting and quasi-real indexing have the same effect on the real value of loan payments.  As such, quasi-real indexing should be looked at as insurance against the central bank not meeting its NGDP target.

A couple of exercises some readers could do to get more familiar with the Two Fundamental Principles of Welfare Economics is to apply them to mortgage borrowers in the U.S. and to the Greek government, given the negative NGDP base drift that occurred in the U.S. and the euro zone after 2007.  In a future blog I will very likely present my own view on how these Principles apply in these cases.

References:

Arrow, K.J. (1965). “The Theory of Risk Aversion,” in Aspects of the Theory of Risk Bearing (Helsinki: Yrjö Jahnssonin Säätiö).

Bailey, Samuel (1837). Money and Its Vicissitudes in Value (London: Effingham Wilson).

Debreu, Gerard (1959). Theory of Value (New York: John Wiley & Sons), Chapter 7.

Eagle, David & Dale Domian (2005). “Quasi-Real Indexing – The Pareto-Efficient Solution to Inflation Indexing,” Finance 0509017, EconWPA, http://ideas.repec.org/p/wpa/wuwpfi/0509017.html.

Eagle, David & Lars Christensen (2012). “Two Equations on the Pareto-Efficient Sharing of Real GDP Risk,” future URL: http://www.cbpa.ewu.edu/papers/Eq2RGDPrisk.pdf.

Koenig, Evan (2011). “Monetary Policy, Financial Stability, and the Distribution of Risk,” Federal Reserve Bank of Dallas Research Department Working Paper 1111.

Pratt, J.W. (1964). “Risk Aversion in the Small and in the Large,” Econometrica 32 (January–April): 122–136.

Sargent, Thomas & Neil Wallace (1975). “‘Rational’ Expectations, the Optimal Monetary Instrument, and the Optimal Money Supply Rule,” Journal of Political Economy 83(2): 241–254.

Selgin, George (2002). Less than Zero: The Case for a Falling Price Level in a Growing Economy (London: Institute of Economic Affairs).

© Copyright (2012) by David Eagle

 


[1] Technically, the Second Principle should replace “average relative risk aversion” with “average relative risk tolerance,” which is from a generalization and reinterpretation by Eagle and Christensen (2012) of the formula Koenig (2011) derived.

Guest blog: The Integral Reviews: Paper 2 – Ball (1999)

By “Integral”

Reviewed: Laurence Ball (1999), “Efficient Rules for Monetary Policy.” International Finance 2(1): pp. 63–83

also featuring

Henrik Jensen (2002), “Targeting Nominal Income Growth or Inflation?” The American Economic Review 92(4): pp. 928–956.

Glenn Rudebusch (2002), “Assessing nominal income rules for monetary policy with model and data uncertainty.” The Economic Journal 112(479): pp. 402–432.

Introduction

Larry Ball’s 1999 paper makes two claims that are relevant for Market Monetarists. One is uninteresting; the second is interesting.

1. NGDP targeting is actively destabilizing.
2. NGDP targeting is inferior to inflation targeting in a wide range of contexts.

The monetary economics blogosphere has analyzed the first claim to exhaustion. For a review see Adam P’s first post on the paper, replies by Scott Sumner and Bill Woolsey, Adam’s rejoinders (1,2), Adam again, and a contribution from Nick Rowe.

The result of that exchange mirrored the academic response to Ball’s paper: the first claim is generally false and holds only under restrictive assumptions, while the second claim is more robust and is typically left unaddressed in the responses. For detailed responses to the stability claim see McCallum (1997) and Dennis (2001).

I’m going to take a stab at the second claim. Let’s start with Ball’s model.

Model

Ball sets up a simple two-equation model, though containing the essential features of the larger-scale models usually employed for policy analysis. The first equation is an IS curve that relates output to its own lag and the lagged interest rate. The second is a Phillips curve that relates inflation to its own lag and lagged output. Mathematically we have:

p(t) = p(t-1) + a*y(t-1) + n(t)
y(t) = c*y(t-1) - b*r(t-1) + e(t)

where p is inflation, y (log) output, and r the interest rate, all measured relative to their steady-state values.

The model contains two important features: a unit root in inflation and a lag structure in which the central bank can affect output one year out but inflation only two years out. This model is trivially simple: there is no explicit accounting for private-sector expectations and there is only a single transmission mechanism of monetary policy, from the interest rate to output to inflation. The model is closed with an interest-rate rule chosen by the central bank to hit some objective.
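As a quick check on that lag structure, the two equations above can be simulated directly. This is a minimal sketch (the coefficients are illustrative, not Ball’s estimates): a one-period interest-rate shock at t=0, with all other shocks set to zero, moves output at t=1 and inflation only at t=2.

```python
a, c, b = 0.4, 0.8, 1.0   # illustrative parameters, not Ball's estimates
T = 6
p = [0.0] * T   # inflation, relative to steady state
y = [0.0] * T   # (log) output, relative to steady state
r = [0.0] * T   # interest rate, relative to steady state
r[0] = 1.0      # one-off tightening at t = 0; shocks n(t), e(t) are zero

for t in range(1, T):
    p[t] = p[t-1] + a * y[t-1]      # Phillips curve
    y[t] = c * y[t-1] - b * r[t-1]  # IS curve

for t in range(T):
    print(t, round(p[t], 3), round(y[t], 3))
# output responds at t=1 (y = -1.0); inflation responds only at t=2 (p = -0.4)
```

Because inflation has a unit root, the tightening permanently lowers the inflation path, which is why the central bank’s rule choice matters so much in this model.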

The unit root is key to Ball’s first claim; the lag structure is key to his second. In a model where the interest rate affects output with a lag, and inflation only through the lagged output term in the Phillips curve, targeting nominal GDP causes the economy to cycle: it hits the NGDP target every period, but does so by causing undesirable oscillations in output and inflation. However, if one changes the Phillips curve to eliminate the lag on output, nominal GDP targeting becomes an extremely attractive alternative to inflation targeting. This is difficult to prove in closed form, so I will appeal to two recent simulation-based papers.

Assessment

Rudebusch (2002) tests the efficacy of two distinct NGDP targeting rules against a Taylor Rule. All three policy rules are evaluated relative to a social loss function that weights the variances of output, inflation, and the nominal interest rate. Rudebusch’s model is identical to Ball’s except for adding a role for private-sector expectations. His simulation results mirror Ball’s theoretical result: for reasonable weights on the forward-looking and backward-looking elements of the Phillips Curve, NGDP targeting severely underperforms relative to the Taylor Rule.

A second simulation is provided by Jensen (2002), whose model is identical to Rudebusch’s save for the lag structure: in Jensen’s model output, inflation and the interest rate are co-determined simultaneously. He tests five different central bank rules, each calibrated to be optimal within its own class: the fully optimal pre-commitment rule, a policy of pure discretion, inflation targeting, nominal income targeting, and a “combination” regime of targeting a weighted average of NGDP and inflation. He finds that NGDP targeting outperforms inflation targeting in nine parameter specifications covering many economically “interesting” cases. In the simulation where supply shocks dominate, a case of much concern to Market Monetarists, NGDP targeting strongly outperforms inflation targeting and indeed comes close to mimicking the results of the fully optimal rule.

So what is left of Ball’s claim? Rudebusch shows that NGDP targeting provides subpar performance in a model with lags in the Phillips curve. However, it is equally true that NGDP targeting outperforms inflation targeting in a model without lags in the Phillips curve. The exercise provides two main results. First, the desirability of NGDP targeting is sensitive to the lag structure of a model, and of course the relevance of the lag structure remains an empirical question. This undermines NGDP targeting’s appeal as a rule which is robust to model structure. Second, the desirability of NGDP targeting is robust within the class of IS-PC models that employ a properly microfounded Phillips curve.

References

Ball, Laurence. 1999. “Efficient Rules for Monetary Policy.” International Finance 2(1): 63–83.

Dennis, Richard. 2001. “Inflation Expectations and the Stability Properties of Nominal GDP Targeting.” The Economic Journal 111(468): 103–113.

Jensen, Henrik. 2002. “Targeting Nominal Income Growth or Inflation?” The American Economic Review 92(4): 928–956.

McCallum, Bennett T. 1997. “The Alleged Instability of Nominal Income Targeting.” NBER Working Paper No. 6291.

Rudebusch, Glenn D. 2002. “Assessing Nominal Income Rules for Monetary Policy with Model and Data Uncertainty.” The Economic Journal 112(479): 402–432.

Two technical notes

1. Ball and Rudebusch measure society’s loss via the weighted sum of the variances of output, inflation, and the interest rate. Jensen, by contrast, uses a societal loss function that depends on the sum of weighted squared deviations of output and inflation from their steady-state values. Cursory inspection of Jensen’s tables shows that if one reformulates his societal loss in terms of variances, IT and NGDPT deliver nearly equivalent outcomes. However, even using variances, NGDPT still weakly outperforms IT in most specifications.
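The difference between the two loss formulations can be illustrated numerically (the paths and weights below are made up for illustration): the two losses differ exactly by the weighted squared means of the paths, so they coincide only when the simulated paths are mean-zero.

```python
# Hypothetical simulated paths, deviations from steady state (= 0).
from statistics import fmean, pvariance

y = [0.5, -0.3, 0.2, -0.4, 0.1]   # output-gap path
p = [0.6, 0.4, 0.5, 0.7, 0.3]     # inflation path
w_y, w_p = 0.5, 1.0               # illustrative weights

# Ball/Rudebusch-style loss: weighted sum of variances.
loss_variance = w_y * pvariance(y) + w_p * pvariance(p)

# Jensen-style loss: weighted mean squared deviations from steady state.
loss_squared = w_y * fmean(v * v for v in y) + w_p * fmean(v * v for v in p)

# The gap is exactly w_y*mean(y)**2 + w_p*mean(p)**2.
print(loss_variance, loss_squared)
```

Here the inflation path has a nonzero mean, so the squared-deviation loss penalises the persistent drift while the variance-based loss ignores it; that is the sense in which the two criteria can rank regimes differently.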

2. The NGDPT and IT regimes in Jensen are themselves “mixed” regimes which put some weight on the output gap. Given that all inflation targeting in practice gives some weight to the output gap, the inclusion of such a term in both rules is innocuous.

——

See Integral’s earlier guest post: “The Integral Reviews: Paper 1 – Koenig (2011)”

Don’t forget the ”Market” in Market Monetarism

Like traditional monetarists, Market Monetarists see money as being at the centre of macroeconomic discussion. To us, both inflation and recessions are monetary phenomena. If central banks print too much money we get inflation, and if they print too little money we get recession or even depression.

This is often at the centre of the arguments made by Market Monetarists. However, we are Market Monetarists precisely because we have a broader view of monetary policy than traditional monetarists. We deeply believe in markets as the best “information system” – also about the stance of monetary policy. Even though we certainly do not disregard the value of studying money supply numbers, we believe that the best indicators of the monetary policy stance are market prices in currency markets, commodity markets, fixed income markets and equity markets. Hence, we believe in a Market Approach to monetary policy in the tradition of, for example, “Manley” Johnson and Robert Keleher.

In fact, we want to take both the “central” and the “banking” out of central banking and ideally replace monetary policy makers with the power of the market. Scott Sumner has suggested that central banks should use NGDP futures in the conduct of monetary policy. In Scott’s set-up monetary policy ideally becomes “endogenous”. I, for my part, have suggested the use of prediction markets in the conduct of monetary policy.

Sometimes the Market Monetarist position is misunderstood as a monetary version of (vulgar) discretionary Keynesianism. However, Market Monetarists advocate the exact opposite. We strongly believe that monetary policy should be based on rules rather than discretion. Ideally, we would prefer that the money supply were completely market based, so that the money supply would move inversely to velocity to ensure a stable NGDP level. See my earlier post “NGDP targeting is not a Keynesian business cycle policy”.

Even though Market Monetarists do not necessarily advocate Free Banking, there is no doubt that Market Monetarist theory is closely related to the thinking of Free Banking theorists such as George Selgin, and I have earlier argued that NGDP level targeting could be seen as a “privatisation strategy”. A less ambitious interpretation of Market Monetarism is certainly also possible, but in any case Market Monetarists stress the importance of markets – both in analysing monetary policy and in the conduct of monetary policy.

—-

See also my earlier post from today on a related topic.

Forget about the “Credit Channel”

One thing that has always frustrated me about Austrian business cycle theory (ABCT) is that it assumes that “new money” is injected into the economy via the banking sector, and many of the results in the model depend on this assumption – something Ludwig von Mises, by the way, acknowledges openly in for example “Human Action”.

If instead it had been assumed that money is injected into the economy via a “helicopter drop” directly to households and companies, then the lag structure in the ABCT model completely changes (I know because many years ago I wrote my master’s thesis on ABCT).

In this sense the Austrians are “Creditists” exactly like Ben Bernanke.

But hold on – so are the Keynesian proponents of the liquidity trap hypothesis. Those who argue that we are in a liquidity trap claim that an increase in the money base will not increase the money supply because there is a banking crisis, so banks will hold on to the extra liquidity they get from the central bank rather than lend it out. I know that this is not exactly the “correct” theoretical interpretation of the liquidity trap, but it is nonetheless the “popular” description of why there is a liquidity trap (there of course is no liquidity trap).

The assumption that “new money” is injected into the economy via the banking sector (through a “Credit Channel”) is hence critical for the results in all these models, and this is highly problematic for the policy recommendations that follow from them.

The “New Keynesians” (the vulgar sort – not people like Lars E. O. Svensson) argue that monetary policy doesn’t work, so we need to loosen fiscal policy, while Creditists like Bernanke say that we need to “fix” the problems in the banking sector to make monetary policy work, and hence become preoccupied with banking sector rescues rather than with the expansion of the broader money supply (“fix” in Bernanke’s thinking being something like TARP etc.). The Austrians are just preoccupied with the risk of boom-bust (if only we could get that…).

What I and other Market Monetarists are arguing is that there is no liquidity trap and that money can be injected into the economy in many ways. Lars E. O. Svensson has of course suggested a foolproof way out of the liquidity trap, which is for the central bank to engage in currency market intervention: the central bank can always increase the money supply by printing its own currency and using it to buy foreign currency.

At the core of many of today’s misunderstandings of monetary policy is that people mix up “credit” and “money” and think that the interest rate is the price of money. Market Monetarists of course know full well that that is not the case. (See my working paper on Market Monetarism for a discussion of the difference between “credit” and “money”.)

As long as policy makers continue to think that the only way money can enter the economy is via the “credit channel” and by manipulating the price of credit (not the price of money), we will be trapped – not in a liquidity trap, but in a mental trap that hinders the right policy response to the crisis. It might therefore be beneficial for Market Monetarists, in addition to arguing for NGDP level targeting, also to explain how this could practically be done in terms of policy instruments. I have for example argued that small open economies (and large open economies for that matter) could introduce “exchange rate based NGDP targeting” (a variation of Irving Fisher’s compensated dollar plan).

Central banks should set up prediction markets

I have spent my entire career as an economist doing forecasting – both of macroeconomic numbers and of financial markets, first as a government economist and later as a financial sector economist. I think I have done quite well, but I also know that I am only rarely able to beat the market “consensus”. If I beat the market 51% of the time, then I think I am worth my money. This is probably a surprise to most non-economists, but it is common knowledge among economists that we really can’t beat the markets consistently.

My point is that the “average” forecast of the market is often a better forecast than that of any individual forecaster. Furthermore, I know of no macroeconomic forecaster who has consistently beaten the “consensus” expectation over long periods. If my readers know of any such super forecaster, I will be happy to hear about them.

I truly believe in the wisdom of the crowd as manifested in free markets. So-called behavioural economists take a different view: they think that the “average” is often wrong and that various biases distort market pricing. I agree that the market is far from perfect. In fact, market participants are often wrong, but they are not systematically wrong, and markets tend to be unbiased. The profit motive, after all, is the best incentive to ensure objectivity.

Unlike the market, where the profit motive rules, central banks and governments are guided not by an objective profit motive but rather by political motives – which might or might not be noble and objective.

It is well known among academic economists and market participants that the forecasts of government institutions are biased. For example, Karl Brunner and Allan Meltzer have demonstrated that the IMF is consistently biased in an overly optimistic direction in its forecasts.

I remember once talking about forecasting with a top central banker at a Central and Eastern European central bank. He complained that he frankly was tired of the research department of the central bank in whose top management he sat. The reason for his dissatisfaction was that the research department, in his view, was too optimistic that the central bank would be able to fulfil its inflation target in the near term. He, on the other hand, held the view that monetary policy needed to be tightened, so the research department’s forecast was “inconvenient” for him. Put another way, he was basically unhappy that the research department was not biased enough.

Luckily that particular central bank has maintained a relatively objective and unbiased research department, but the example illustrates that central bank forecasts are in no way guaranteed to be unbiased. In fact, some central banks are open about the fact that their forecasts are biased. Hence, today some central banks assume in their “forecasts” that their target (normally an inflation target) is reached within a given period, typically 2-3 years.

When central banks publish forecasts in which they assume they reach their targets within a given timeframe, they at the same time have to say how they will be able to reach those targets. This has led some central banks to publish what is called the “interest rate path” – that is, how interest rates should be expected to change over the forecasting period to ensure that the target is met. This is problematic in many ways. One is that it is normally the research department in the central bank that makes the forecasts, while it is the management of the central bank (for example the FOMC in the Federal Reserve or the MPC in the Bank of England) that makes the decisions on monetary policy. Furthermore, we all know that monetary policy is exactly not about interest rates. Interest rates do not tell us much about whether monetary policy is tight or loose. Any Market Monetarist will tell you that.

Instead of relying on in-house forecasts, central banks could consult the market about the outlook for the economy and markets. Scott Sumner has for example argued that monetary policy should be conducted by targeting NGDP futures. I think that is an excellent idea. However, first of all it could be hard to set up a genuine NGDP futures market. Second, the experience with inflation-linked bonds shows that the prices of such bonds are often distorted by, for example, lack of liquidity in the particular market.

I believe that these problems can be solved, and I think Scott’s suggestion ideally is the right one. However, there is a simpler solution, which in principle is the same thing but would be much less costly and complicated to operate. My suggestion is that the central bank simply set up a prediction market for key macroeconomic variables – including the variables that the central bank targets (or could target), such as the NGDP level and growth rate, inflation and the price level.

So how do prediction markets work? Prediction markets are basically betting on the outcome of different events – for example presidential elections in the US or macroeconomic data.

Let’s say the Federal Reserve organised a prediction market for the nominal GDP level (NGDP). It would organise “bets” on the level of NGDP for, for example, every year of the next decade. Market participants would then buy and sell the NGDP “future” for any given year, and the market pricing would tell the Fed the market expectation for NGDP at any given time. If the market pricing of NGDP is below the targeted level of NGDP, then monetary policy is too tight and needs to be eased; if the market expectation for NGDP is above the targeted level, then monetary policy is too loose. It really is pretty simple, but I am convinced it would work.
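The decision rule just described can be sketched in a few lines of Python (the function name, the tolerance parameter and the dollar figures are all hypothetical, chosen only to illustrate the logic): compare the prediction-market price for NGDP with the target path and read off the policy signal.

```python
def policy_signal(market_ngdp: float, target_ngdp: float,
                  tol: float = 0.001) -> str:
    """Return the monetary-policy signal implied by the market's NGDP forecast.

    tol is a hypothetical tolerance band (0.1% of target) within which
    the market forecast counts as being on target.
    """
    gap = (market_ngdp - target_ngdp) / target_ngdp
    if gap < -tol:
        return "ease"      # market expects NGDP below target: policy too tight
    if gap > tol:
        return "tighten"   # market expects NGDP above target: policy too loose
    return "on target"

print(policy_signal(market_ngdp=14.8e12, target_ngdp=15.0e12))  # ease
print(policy_signal(market_ngdp=15.2e12, target_ngdp=15.0e12))  # tighten
```

The point of the sketch is that the rule is mechanical: once the market price exists, no in-house forecast is needed to generate the signal.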

The experience with prediction markets is quite good, and prediction markets have been used to forecast everything from the outcome of elections to how much a movie will bring in at the box office. A clear advantage of prediction markets is that they are quite easy to set up and run. Furthermore, it has been shown that even relatively small bets give good and reliable predictions. This means that if a central bank set up a prediction market, the average citizen in the country could easily participate in the “monetary policy market”.

I hence believe that prediction markets could be a very useful tool for central banks – both as a forecasting tool and as a communication tool. A truly credible central bank would have no problem relying on market forecasts rather than on internal forecasts.

I of course understand that central banks, for all kinds of reasons, would be very reluctant to base monetary policy on market predictions. But imagine that the Federal Reserve had had a prediction market for NGDP (or inflation for that matter) in 2007-8. Then there is no doubt that it would have had a real-time indication of how much monetary conditions had tightened, and that would likely have spurred the Fed into action much earlier than was actually the case. A problem with traditional macroeconomic forecasts is that they take time to produce and hence are not available to policy makers until some time has gone by.

This might all seem a little farfetched, but central banks already rely on market forecasts to some extent. Hence, it is normal for central banks to survey professional forecasters, and most central banks use for example futures prices to predict oil prices when they do their inflation forecasts. Using prediction markets would just take this practice to a new level.

So I challenge central banks that want to strengthen their credibility to introduce prediction markets for key macroeconomic variables, including the variables they target, and to communicate clearly about the implications for monetary policy of the forecasts from these prediction markets.

—-

See my earlier comment on prediction markets and monetary policy here.

Update: If you are interested in predictions markets you should have a look at Robin Hanson’s blog Overcoming Bias and Chris Masse’s blog Midas Oracle.

Japan shows that QE works

I am getting a bit worried – it has happened again! I agree with Paul Krugman about something, or rather, this time around it is actually Krugman who agrees with me.

In a couple of posts (see here and here) I have argued that the Japanese deflation story is more complicated than both economists and journalists often assume.

In my latest post (“Did Japan have a productivity norm?”) I argued that the deflation of the past decade has been less harmful than the deflation of the 1990s. The reason is that the deflation of the 2000s (prior to 2008) was primarily a result of positive supply shocks, while the deflation of the 1990s was primarily a result of much more damaging demand deflation. I based this conclusion on my decomposition of inflation (or rather deflation) using my Quasi-Real Price Index.
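For readers who want to replicate the decomposition, here is a minimal Python sketch of the Quasi-Real idea from my earlier posts (the numbers below are made up for illustration): demand inflation is nominal spending growth minus trend RGDP growth, and supply inflation is whatever is left of headline inflation.

```python
def decompose(headline_inflation: float, ngdp_growth: float,
              trend_rgdp_growth: float):
    """Split headline inflation into demand and supply components,
    following Pd = n - yp from the Quasi-Real Price Index."""
    demand = ngdp_growth - trend_rgdp_growth
    supply = headline_inflation - demand
    return demand, supply

# A "benign deflation" year (illustrative): NGDP grows at trend, prices
# fall because productivity outpaces trend.
print(decompose(headline_inflation=-0.5, ngdp_growth=2.0,
                trend_rgdp_growth=2.0))
# demand 0.0, supply -0.5: all of the deflation is supply-driven

# A demand-deflation year (illustrative): NGDP growth collapses.
print(decompose(headline_inflation=-0.5, ngdp_growth=-1.0,
                trend_rgdp_growth=2.0))
# demand -3.0, supply 2.5: the deflation is driven by collapsing demand
```

The same headline deflation rate can thus hide very different underlying stories, which is exactly the distinction between Japan’s 1990s and its 2000s.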

Here is Krugman:

“A number of readers have asked me for an evaluation of Eamonn Fingleton’s article about Japan. Is Japan doing as well as he says?

Well, no — but his point about the overstatement of Japan’s decline is right…

…The real Japan issue is that a lot of its slow growth has to do with demography. According to OECD numbers, in 1990 there were 86 million Japanese between the ages of 15 and 64; by 2007, that was down to 83 million. Meanwhile, the US working-age population rose from 164 million to 202 million.”

This is exactly my view. In terms of GDP per capita growth, Japan has basically done as well (or rather as badly) as other large industrialised countries such as Germany and the US.

This is pretty simple to illustrate with a graph of GDP per capita for the G7 countries since 1980 (index 2001=100).

(UPDATE: JP Koning has a related graph here)

A clear picture emerges. Japan was a star performer in the 1980s. The 1990s clearly was a lost decade, while over the past decade Japan has performed more or less in line with the other G7 countries. In fact, there is only one G7 country with a “lost decade” over the past 10 years, and that is Italy.

Quantitative easing ended Japan’s lost decade

Milton Friedman famously blamed the Bank of Japan for the lost decade of the 1990s, and as my previous post on Japan demonstrated, there is no doubt at all that monetary policy was highly deflationary in the 1990s and that this undoubtedly is the key reason for Japan’s lost decade (see my graph from the previous post).

In 1998 Milton Friedman argued that Japan could pull out of the crisis and deflation by easing monetary policy through an expansion of the money supply – what we today call Quantitative Easing (QE).

Here is Friedman:

“The surest road to a healthy economic recovery is to increase the rate of monetary growth, to shift from tight money to easier money, to a rate of monetary growth closer to that which prevailed in the golden 1980s but without again overdoing it. That would make much-needed financial and economic reforms far easier to achieve.

Defenders of the Bank of Japan will say, “How? The bank has already cut its discount rate to 0.5 percent. What more can it do to increase the quantity of money?”

The answer is straightforward: The Bank of Japan can buy government bonds on the open market, paying for them with either currency or deposits at the Bank of Japan, what economists call high-powered money. Most of the proceeds will end up in commercial banks, adding to their reserves and enabling them to expand their liabilities by loans and open market purchases. But whether they do so or not, the money supply will increase.

There is no limit to the extent to which the Bank of Japan can increase the money supply if it wishes to do so. Higher monetary growth will have the same effect as always. After a year or so, the economy will expand more rapidly; output will grow, and after another delay, inflation will increase moderately. A return to the conditions of the late 1980s would rejuvenate Japan and help shore up the rest of Asia.”

(Yes, it sounds an awful lot like Scott Sumner…or rather Scott learned from Friedman)

In early 2001 the Bank of Japan finally decided to listen to the advice of Milton Friedman, and as the graph clearly shows, this is when Japan started to emerge from the lost decade and when real GDP per capita started to grow in line with the other G7 countries (well, Italy was falling behind…).

The actions of the Bank of Japan after 2001 were certainly not perfect, and one can clearly question how the BoJ implemented QE, but I think it is pretty clear that even the BoJ’s half-hearted monetary easing did the job and pulled Japan out of the depression. In that regard it should be noted that headline inflation remained negative after 2001, but as I have shown in my previous post, the Bank of Japan managed to end demand deflation (while supply deflation persisted).

And yes, the Bank of Japan of course should have introduced a much clearer nominal target (preferably an NGDP level target), and yes, Japan has once again gone back to demand deflation after the Bank of Japan ended QE in 2007. But that does not change the fact that the little the BoJ actually did was enough to get Japan growing again.

The “New Normal” is a monetary – not a real – phenomenon

I think a very important conclusion can be drawn from the Japanese experience. There is no such thing as a “New Normal” in which deleveraging necessitates decades of no growth. Japan only had one lost decade, not two. Once the BoJ acted to end demand deflation, the economy recovered.

Unfortunately the Bank of Japan seems to have moved back to the sins of the 1990s – as have the Federal Reserve and the ECB. We can avoid a global lost decade if these central banks learn the lessons from Japan – both the good and the bad.

HT JP Koning