A Taylor rule is a monetary-policy rule that stipulates how much the central bank should change the nominal interest rate in response to divergences of actual GDP from potential GDP and of the actual inflation rate from a target inflation rate. It was first proposed by U.S. economist John B. Taylor in 1993.[1] The rule can be written as follows:

$$ i_t = \pi_t + r_t^* + a_\pi (\pi_t - \pi_t^*) + a_y (y_t - \bar{y}_t) $$

In this equation, $i_t$ is the target short-term nominal interest rate (e.g. the federal funds rate in the US), $\pi_t$ is the rate of inflation as measured by the GDP deflator, $\pi_t^*$ is the desired rate of inflation, $r_t^*$ is the assumed equilibrium real interest rate, $y_t$ is the logarithm of real GDP, and $\bar{y}_t$ is the logarithm of potential output, as determined by a linear trend. A possible advantage of such a rule is in avoiding the inefficiencies of time inconsistency that arise from the exercise of discretionary policy.[2][3]
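For concreteness, here is a minimal sketch in Python of the rule as written above. The default parameter values (an equilibrium real rate and inflation target of 2% each, and $a_\pi = a_y = 0.5$) follow Taylor's 1993 calibration; the function name and interface are illustrative, not a standard library API.

```python
def taylor_rate(inflation, log_gdp, log_potential_gdp,
                r_star=2.0, pi_star=2.0, a_pi=0.5, a_y=0.5):
    """Nominal interest rate prescribed by the Taylor (1993) rule.

    inflation, r_star, and pi_star are in percent per year; log_gdp and
    log_potential_gdp are natural logs of real and potential GDP, so
    their difference times 100 is the output gap in percent.
    """
    output_gap = 100.0 * (log_gdp - log_potential_gdp)
    return inflation + r_star + a_pi * (inflation - pi_star) + a_y * output_gap

# At target inflation with a closed output gap, the rule prescribes
# the neutral nominal rate r* + pi* = 4%.
print(taylor_rate(inflation=2.0, log_gdp=10.0, log_potential_gdp=10.0))  # 4.0
```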
According to the rule, both $a_\pi$ and $a_y$ should be positive (as a rough rule of thumb, Taylor's 1993 paper proposed setting $a_\pi = a_y = 0.5$). That is, the rule "recommends" a relatively high interest rate (a "tight" monetary policy) when inflation is above its target or when the economy is above its full employment level, and a relatively low interest rate ("easy" monetary policy) in the opposite situations.
Sometimes monetary policy goals may conflict, as in the case of stagflation, when inflation is above its target while the economy is below full employment. In such a situation, the rule offers guidance on how to balance these competing considerations in setting an appropriate level for the interest rate. In particular, by specifying $a_\pi > 0$, the Taylor rule says that the central bank should raise the nominal interest rate by more than one percentage point for each percentage point increase in inflation (specifically, by $1 + a_\pi$). In other words, since the real interest rate is (approximately) the nominal interest rate minus inflation, stipulating $a_\pi > 0$ is equivalent to saying that when inflation rises, the real interest rate should be increased.
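This equivalence follows from a one-line rearrangement of the rule above. Subtracting current inflation $\pi_t$ from both sides gives the prescribed real rate:

$$ i_t - \pi_t = r_t^* + a_\pi (\pi_t - \pi_t^*) + a_y (y_t - \bar{y}_t) $$

With $a_\pi > 0$ and the output gap held fixed, the real rate $i_t - \pi_t$ rises with inflation, even though the nominal rate is the instrument actually being set.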
Although the Fed does not explicitly follow the rule, many analyses show that the rule does a fairly accurate job of describing how US monetary policy has actually been conducted during the past decade under Alan Greenspan.[4][5] Similar observations have been made about central banks in other developed economies, both in countries like Canada and New Zealand that have officially adopted inflation targeting rules, and in others like Germany where the central bank's policy did not officially target the inflation rate.[6][7] This observation has been cited by many economists as a reason why inflation has remained under control and the economy has been relatively stable in most developed countries since the 1980s.
During an EconTalk podcast, Taylor explained the rule in simple terms using three variables: the inflation rate, GDP growth, and the interest rate. If inflation were to rise by 1%, the proper response would be to raise the interest rate by 1.5% (Taylor explains that it doesn't always need to be exactly 1.5%, but being larger than 1% is essential). If GDP falls by 1% relative to its growth path, then the proper response is to cut the interest rate by 0.5%.[8]
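These back-of-the-envelope numbers line up with the illustrative taylor_rate helper sketched earlier under its default coefficients $a_\pi = a_y = 0.5$: a one-point rise in inflation moves the prescribed rate by $1 + a_\pi = 1.5$ points, and output falling 1% below trend moves it by $-a_y = -0.5$ points.

```python
# Reusing the illustrative taylor_rate helper defined above.
base  = taylor_rate(inflation=2.0, log_gdp=10.00, log_potential_gdp=10.0)
hot   = taylor_rate(inflation=3.0, log_gdp=10.00, log_potential_gdp=10.0)
slump = taylor_rate(inflation=2.0, log_gdp=9.99, log_potential_gdp=10.0)

print(round(hot - base, 2))    #  1.5: inflation up 1 point -> rate up 1.5 points
print(round(slump - base, 2))  # -0.5: GDP 1% below trend -> rate cut 0.5 points
```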
Orphanides (2003) claims that the Taylor rule can misguide policy makers, since they face real-time data rather than the revised data available in retrospect. He shows that the Taylor rule matches the US funds rate less perfectly when these informational limitations are accounted for, and that an activist policy following the Taylor rule would have resulted in inferior macroeconomic performance during the Great Inflation of the 1970s.[9]