In economics, a Taylor rule is a reduced-form approximation of the responsiveness of the nominal interest rate, as set by the central bank, to changes in inflation, output, or other economic conditions. In particular, the rule describes how, for each one-percent increase in inflation, the central bank tends to raise the nominal interest rate by more than one percentage point. This aspect of the rule is often called the Taylor principle. While such rules may serve as concise, descriptive proxies for central bank policy, they are not explicitly prescriptively followed by central banks when setting nominal rates.
As an equation
According to Taylor’s original version of the rule, the nominal interest rate should respond to divergences of actual inflation rates from target inflation rates and of actual Gross Domestic Product (GDP) from potential GDP:

i_t = π_t + r_t^* + a_π(π_t − π_t^*) + a_y(y_t − ȳ_t)
In this equation, i_t is the target short-term nominal interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), π_t is the rate of inflation as measured by the GDP deflator, π_t^* is the desired rate of inflation, r_t^* is the assumed equilibrium real interest rate, y_t is the logarithm of real GDP, and ȳ_t is the logarithm of potential output, as determined by a linear trend.
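The rule above is simple enough to express directly. The following is a minimal sketch in Python; the function name and argument names are illustrative, and all quantities are expressed as decimals (e.g. 0.02 for 2%):

```python
import math

def taylor_rate(pi, pi_target, r_star, log_gdp, log_potential,
                a_pi=0.5, a_y=0.5):
    """Nominal interest rate prescribed by the Taylor rule.

    pi            -- current inflation rate
    pi_target     -- desired inflation rate
    r_star        -- assumed equilibrium real interest rate
    log_gdp       -- logarithm of real GDP
    log_potential -- logarithm of potential output
    a_pi, a_y     -- policy response coefficients (Taylor's 1993 values)
    """
    return (pi + r_star
            + a_pi * (pi - pi_target)
            + a_y * (log_gdp - log_potential))

# Hypothetical inputs: inflation 4% against a 2% target, equilibrium
# real rate 2%, output 1 (log) point above potential.
rate = taylor_rate(0.04, 0.02, 0.02, 0.01, 0.0)
# 0.04 + 0.02 + 0.5*0.02 + 0.5*0.01 = 0.075, i.e. a 7.5% nominal rate
```

Note that because inflation enters both directly and through the inflation gap, a one-point rise in inflation raises the prescribed rate by 1 + a_π points, which is the Taylor principle.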
In this equation, both a_π and a_y should be positive (as a rough rule of thumb, Taylor’s 1993 paper proposed setting a_π = a_y = 0.5). That is, the rule “recommends” a relatively high interest rate (a “tight” monetary policy) when inflation is above its target or when output is above its full-employment level, in order to reduce inflationary pressure. It recommends a relatively low interest rate (“easy” monetary policy) in the opposite situation, to stimulate output. Sometimes monetary policy goals may conflict, as in the case of stagflation, when inflation is above its target while output is below full employment. In such a situation, a Taylor rule specifies the relative weights given to reducing inflation versus increasing output.
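The role of the weights in a stagflation scenario can be made concrete with a small numerical sketch (the figures below are hypothetical, chosen so the two gaps are equal in size and opposite in sign):

```python
import math

# Hypothetical stagflation scenario: inflation is 2 points above target
# while output is 2 (log) points below potential.
pi, pi_target, r_star = 0.05, 0.03, 0.02
output_gap = -0.02

def rate(a_pi, a_y):
    """Taylor-rule rate for a given pair of policy weights."""
    return pi + r_star + a_pi * (pi - pi_target) + a_y * output_gap

# With equal weights (Taylor's 1993 values), the two gap terms
# offset exactly in this example: 0.05 + 0.02 + 0.01 - 0.01 = 0.07.
balanced = rate(0.5, 0.5)

# An inflation-focused weighting prescribes a tighter policy...
hawkish = rate(1.0, 0.25)
# ...while an output-focused weighting prescribes an easier one.
dovish = rate(0.25, 1.0)
```

Here hawkish > balanced > dovish, showing how the choice of a_π relative to a_y resolves the conflict between reducing inflation and stimulating output.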