**BIS Working Papers **

**No 700**

**The perils of approximating fixed-horizon inflation forecasts with fixed-event forecasts**

by James Yetman

**Monetary and Economic Department **

February 2018

JEL classification: E31, E58

Keywords: deflation; inflation expectations; forecast disagreement; monetary policy

This publication is available on the BIS website (www.bis.org).

*© Bank for International Settlements 2017. All rights reserved. Brief excerpts may be reproduced or translated provided the source is stated.*

ISSN 1020-0959 (print)

ISSN 1682-7678 (online)

**The perils of approximating fixed-horizon inflation forecasts with fixed-event forecasts **

*February 2018 *

*James Yetman*

**Abstract **

A common practice in studies using inflation forecasts is to approximate fixed-horizon forecasts with fixed-event ones. Here we show that this may be problematic. In a panel of US inflation forecast data that allows us to compare the two, the approximation results in a mean absolute approximation error of around 0.2–0.3 percentage points (around 10% of the level of inflation), and statistically significant differences in both the variances and persistence of the approximate inflation forecasts relative to the actual forecasts. To reduce these problems, we propose an adjustment to the approximation, consistent with a model where longer-horizon forecasts are more heavily “anchored”, while shorter-horizon forecasts more closely reflect current inflation levels.

Keywords: fixed-event forecasts, fixed-horizon forecasts, inflation expectations

JEL classification: C43, E31

[email protected] Bank for International Settlements, Representative Office for Asia and the Pacific, 78th Floor, Two IFC, 8 Finance Street, Central, Hong Kong SAR. The views expressed in this paper are those of the author and do not necessarily reflect those of the Bank for International Settlements. I thank, without implication, Andy Filardo, Güneş Kamber, Aaron Mehrotra, Frank Packer, Pierre Siklos and seminar participants at the BIS (Hong Kong and Basel) for helpful comments.

**1. Introduction**

Macroeconomic forecasts can be characterised as either "fixed event" or "fixed horizon". With fixed-event forecasts, the outcome being forecast remains fixed while the forecast horizon changes from one forecast to the next - eg forecasts of 2017 inflation made in consecutive months. By contrast, with fixed-horizon forecasts, the timing of the outcome and the forecast move together, so that the period between them remains constant - eg forecasts of inflation over the coming 12 months made at different points in time.

Fixed-event and fixed-horizon forecasts are each better suited for different purposes. Fixed-event forecasts provide an effective means to assess how expectations are changing in response to developments: since the outcome being forecast is unchanged from one forecast to the next, forecast changes must be due to changed expectations of the outcome. However, because the forecast horizon varies from one forecast to the next, other properties of the forecasts (such as the degree of forecaster disagreement, or the persistence of forecasts) may not be readily comparable.

In contrast, fixed-horizon forecasts are generally preferable when using forecasts as a proxy for expectations in estimation exercises. For example, when estimating a forward-looking Phillips curve, there is presumably an optimal forecast horizon for expected inflation for which the model will fit best. Using fixed-event forecasts in this context without adequately controlling for the varying horizons would compromise econometric performance. Perhaps not surprisingly, when central banks collect inflation forecasts, they generally focus on fixed-horizon forecasts.

Given the relatively limited availability of high-quality surveys of fixed-horizon forecasts, especially outside some major advanced economies, researchers have employed several different approaches to use fixed-event forecasts in their place:

- Group forecasts by horizon and treat those in each group as having the same properties - for example, grouping all forecasts for "next year" versus "this year" as in Johnson (2002). Along similar lines, Capistran and Ramos-Francia (2010) group all forecasts made in a given year in their study of forecast dispersion and, apart from adding fixed effects for each month, assume that the data-generating process is the same across horizons. Especially at shorter horizons, this is likely to be problematic. Forecasts made in January for the outcome in that calendar year are likely to be very different from forecasts made in December for the same year, given that most of the information for the current year is likely to be known by the forecaster at the time of the shortest-horizon forecasts.

- Model the evolution of the forecasts across the different horizons explicitly. An early paper taking this approach was Davies and Lahiri (1995), who model forecast errors as being made up of bias, monthly shocks and idiosyncratic errors, where the monthly shocks accumulate across horizons and the distribution of the errors depends on the forecast horizon. Another approach is to allow forecasts to be more heavily influenced by a long-run anchor of inflation expectations, or endpoint, the longer is the forecast horizon (eg Kozicki and Tinsley (2012), Mehrotra and Yetman (2014) and Yetman (2017)). Modelling the evolution of forecasts across horizons may be computationally demanding compared with other options, however.
- Approximate fixed-horizon forecasts based on fixed-event forecasts. In this category, see Dovern et al (2012) and Siklos (2013, 2017), in studies of forecast disagreement; Johnson (2003), in a study of the effect of inflation targeting on the level of expected inflation; Gerlach (2007), who uses the approximation to construct a measure of inflation expectations when estimating the monetary policy reaction function of the European Central Bank; Kortelainen et al (2011), who generate approximations of expected inflation and GDP to use in estimating a three-equation New Keynesian model; and Davis (2014), who assesses the responses of inflation expectations to various shocks. Some authors cite Dovern and Fritsche (2008) in defence of this approach. They compare approximate fixed-horizon forecasts, constructed from fixed-event forecasts, with actual fixed-horizon forecasts constructed from quarterly forecasts made by the same panel of forecasters. They argue that the approximation results in levels of dispersion across forecasters that are highly correlated with the true level of dispersion.

This third approach involves an approximation error. One objective of this paper is to assess how large the approximation error is likely to be, and what it implies for other properties of the forecasts. This is generally difficult to determine because, while there are panels of fixed-horizon and fixed-event forecasts for the same variables, they generally differ along many dimensions (eg the timing of the forecasts and the composition of the forecaster panel) such that they cannot be readily compared.

However, there is one panel of forecaster-level inflation forecasts that we can use to construct both fixed-event and fixed-horizon forecasts from the same underlying data. This data set is the "Survey of Professional Forecasters" (SPF), collated by the Federal Reserve Bank of Philadelphia. It is the same data set as Dovern and Fritsche (2008) explored. The quarterly SPF includes forecasts of inflation for six quarters (the previous quarter, the current quarter and each of the following four quarters) as well as for up to three calendar years (defined as the fourth-quarter-over-fourth-quarter growth rate): the current year as well as the following two years.

Using data beginning in 1990, we find that the approximation introduces distortions into the inflation measures that are often quantitatively large and statistically significant. The mean absolute approximation error is around 0.2-0.3 percentage points, or around 10% of the level of inflation. The approximation error also leads to statistically significant changes in the variance of the resulting inflation measures. Further, the persistence in the annual inflation forecast series is often statistically significantly different when the approximation is used, a problem that appears to be worse in the second half of our sample period than in the first.

We illustrate a possible partial fix for the approximation bias, drawing on the idea that inflation forecasts are more heavily "anchored" as the forecast horizon increases, as discussed in Kozicki and Tinsley (2012), Mehrotra and Yetman (2014) and Yetman (2017). The "fix" reduces the mean absolute error by 13% (and the mean squared error by 21%), and results in fewer significant differences between the actual and approximate inflation rates in terms of their variances and persistence as well. Further, higher-order moments - skewness and kurtosis - of the resulting panel of approximate forecasts are more like those of the actual data when our adjustment to the standard approximation is implemented.

Additionally, it would be possible to examine the effect of using the approximation using data from Consensus Economics, but only based on average forecasts across a panel. For the 27 economies included in their "Consensus Forecasts" or "Asia-Pacific Consensus Forecasts" publications, average quarterly inflation forecasts are published for each of the following seven to eight quarters on a quarterly basis. Under the assumption that the composition of the annual fixed-event and quarterly forecast panels is the same, the quarterly forecasts could be used to compute actual average 12-month inflation forecasts, which should be comparable with approximate forecasts based on average fixed-event annual forecasts. In their eastern European and Latin American publications, Consensus Economics includes average forecasts of monthly inflation for each of the following six months only, so no such comparison with annual inflation is possible. However, in no case could published Consensus Economics data be used to examine the effects of the approximation at the level of the individual forecaster, unlike the SPF forecast data examined here.

**2. Approximating 12-month inflation forecasts**

When approximating fixed-horizon forecasts with fixed-event ones, the most common approach is as follows. Let $\pi^e_T$ denote the forecast of annual inflation for the year ending in year $T$, made in month $m$ of year $T$.

Fixed 12-month horizon forecasts are then approximated in the studies listed in the introduction as averages of the forecasts for the current and next calendar years, weighted by their shares in the forecast period:

$$\hat{\pi}^{12m} = \frac{13-m}{12}\,\pi^e_T + \frac{m-1}{12}\,\pi^e_{T+1}. \qquad (1)$$

This approach implies, for example, that the 12-month-ahead forecast for inflation made in October 2012 is approximated by the sum $\frac{3}{12}\,\pi^e_{2012} + \frac{9}{12}\,\pi^e_{2013}$.
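The weighting scheme above can be sketched as follows (a minimal illustration; the function name and example numbers are ours, not from the paper):

```python
from fractions import Fraction

def approx_12m_forecast(month: int, pi_current_year: float, pi_next_year: float) -> float:
    """Approximate a 12-month fixed-horizon inflation forecast made in
    `month` (1-12) of the current year, as the share-weighted average of
    the fixed-event forecasts for the current and next calendar years."""
    # (13 - month)/12 months of the 12-month horizon fall in the current year
    w_current = Fraction(13 - month, 12)
    return float(w_current * Fraction(str(pi_current_year))
                 + (1 - w_current) * Fraction(str(pi_next_year)))

# Forecast made in October (month 10): weights 3/12 on the current year, 9/12 on the next.
print(approx_12m_forecast(10, 2.0, 2.4))  # 3/12 * 2.0 + 9/12 * 2.4 = 2.3
```

Note that a forecast made in January places all of the weight on the current-year fixed-event forecast, and the weight shifts linearly toward the next year as the year progresses.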

**3. Calculating actual and approximate forecasts**

The SPF provides us with quarterly and annual inflation forecasts at a quarterly frequency that we can use to construct both actual and approximate 12-month inflation forecasts for each quarter. For example, in the first quarter of each year, the available forecasts include the following:

Combining all horizons, then, we have three observations in each year for each forecaster where the annual forecasts and the quarterly forecasts should be consistent with each other (based on equations 2, 4 and 10). In addition, we have eight observations in each year for each forecaster where we can compute both approximate and actual 12-month inflation to compare (based on equations 3, 5-9 and 11-12).
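The exact construction depends on equations not reproduced here. As an illustrative sketch (the function name and the assumption that quarterly forecasts are quoted as quarter-over-quarter annualised rates are ours), an actual four-quarter inflation forecast can be built by compounding four quarterly forecasts:

```python
def actual_12m_forecast(annualised_qoq: list[float]) -> float:
    """Compound four quarter-over-quarter annualised inflation forecasts
    (in per cent) into a single four-quarter inflation rate."""
    if len(annualised_qoq) != 4:
        raise ValueError("need exactly four quarterly forecasts")
    gross = 1.0
    for g in annualised_qoq:
        gross *= (1.0 + g / 100.0) ** 0.25  # de-annualise each quarterly rate
    return 100.0 * (gross - 1.0)

# If each quarter is forecast at an annualised 2%, the four-quarter rate is 2%.
print(round(actual_12m_forecast([2.0, 2.0, 2.0, 2.0]), 6))
```

Forecasts built this way involve no horizon approximation, which is what makes them a useful benchmark for the approximate measures.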

In our analysis of the SPF forecasts, we focus on the Q1 1990-Q1 2017 period. We avoid earlier forecasts, given that the behaviour of inflation forecasts during relatively high inflation periods may be a poor guide to forecast performance today. We construct the measures outlined above for each forecaster, in each period. We examine all observations for which both comparable measures are available.

Graph 1 displays the three tests of consistency outlined above (in equations 2, 4 and 10). If the annual and quarterly forecasts were perfectly consistent, then we would expect them to lie on a 45-degree line drawn through the origin. Even allowing for rounding error (given that the forecasts are typically given as percentages to one decimal place), there are clearly some inconsistencies: in 4.5% of cases, the difference exceeds 0.4 percentage points, and in 1.4% of cases 1.0 percentage point. In the following analysis, we will consider dropping all forecasts where the inconsistency is large as a robustness check.

**4. Comparing actual and approximate forecasts**

Our next step is to compare the actual and approximate 12-month inflation forecasts under each of the cases outlined above, and to highlight the differences between them. Some key results are summarised in Table 1 below, both for the full sample and dropping those forecasts where quarterly and annual inflation forecasts are inconsistent. Shaded rows represent results where there is no approximation involved, and any difference is due to measurement or rounding error, or underlying inconsistency between the quarterly and annual forecasts. We use these as a benchmark to indicate the level of variation that is attributable to random noise, as opposed to approximation error.

Column (3) contains the mean absolute difference between the approximation and the actual 12-month inflation forecasts. For (shaded) quarters in which the only source of difference is measurement and rounding error, these differences are less than 0.1 percentage points (and around 0.03 percentage points once we drop observations with larger inconsistencies from the sample). By contrast, these average differences are three to 12 times as large for other quarters, or around 0.2-0.3 percentage points, which is approximately 10% of the average inflation rate during the sample period. This increase in average error is highly statistically significant (p-value < 0.01 in all cases). The results in column (4) illustrate the same message based on mean squared differences.

Columns (5-7) display the variance of forecasts across the two measures of annual inflation, and the p-value from an F-test of equal variance. For the (shaded) quarters where the only difference is rounding and measurement error, this difference is small and never statistically significant. However, for other cases, the difference is often statistically highly significant, with p-values smaller than 0.05 in nine out of the 16 cases examined. Note, however, that there is no simple pattern of differences: in some cases the approximation results in a significantly larger variance, and in others in a significantly smaller variance.

Columns (8-9) illustrate differences in the degree of skewness across the two measures, and the final two columns display kurtosis. Here there is a pattern: kurtosis is nearly always greater in the 12-month forecasts constructed from quarterly forecasts than in the approximate measure based on annual forecasts.

We also illustrate the differences for one particular case, corresponding to the third row of results in the table, visually in Graph 2. For these forecasts, of inflation for the year ending in the fourth quarter that were made in the second quarter, we take the distribution of forecasts for each year and calculate the median (solid line) and quartiles (dotted lines) of the approximate (red) and actual (blue) inflation forecasts separately. The graph shows that the two distributions are very different at times. For example, for forecasts of 2001 inflation, the median approximate forecast is 0.6 percentage points higher than the actual forecast, and there is no overlap between the interquartile ranges of the two series. Similar-sized gaps also exist in 2007-09, with the approximate forecasts much higher than the actual for the first two of those years, and then much lower for the third. Given that these years with the largest gaps all correspond to periods where the United States was in recession, it appears that the approximation is especially poor during economic downturns.

Taking these results together, the approximation results in a non-trivial amount of noise in the 12-month inflation forecasts, relative to forecasts constructed from quarterly data where no approximation is involved. In addition, it often results in a statistically significant change in the variance of the distribution and reduces the thickness of the distribution's tails. Given that the variance and kurtosis are potential measures of forecaster disagreement, which in turn is commonly interpreted as a measure of central bank credibility (eg Dovern et al (2012); Siklos (2013, 2017)), the approximation may lead to biased empirical analysis.

Another aspect of the forecasts that we assess is the degree of persistence. In particular, we estimate:

$$\pi^e_{i,t} = \alpha + \rho\,\pi^e_{i,t-1} + \varepsilon_{i,t}$$

on a panel where forecasters are indexed by $i$, and compare the estimates of the persistence parameter $\rho$ between the actual and approximate 12-month inflation forecasts.
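The persistence comparison can be sketched as follows (assuming, for illustration, pooled OLS on the forecaster panel; the paper's exact estimator may differ):

```python
import numpy as np

def persistence(forecasts: np.ndarray) -> float:
    """Pooled OLS estimate of rho in pi[i, t] = alpha + rho * pi[i, t-1] + e,
    where rows index forecasters and columns index periods."""
    y = forecasts[:, 1:].ravel()   # pi[i, t]
    x = forecasts[:, :-1].ravel()  # pi[i, t-1]
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coef[1])          # the slope is the persistence estimate

# Example: forecasts that mean-revert halfway toward 2% each period have rho near 0.5.
rng = np.random.default_rng(0)
panel = np.empty((50, 30))
panel[:, 0] = rng.normal(2.0, 1.0, size=50)
for t in range(1, 30):
    panel[:, t] = 2.0 + 0.5 * (panel[:, t - 1] - 2.0) + rng.normal(0, 0.1, size=50)
print(round(persistence(panel), 2))
```

Running the same estimator separately on the actual and approximate forecast panels then gives two persistence estimates whose difference can be tested.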

The results are contained in Table 2. For the quarters where the differences are only due to measurement and rounding error (the shaded rows), there is generally no statistically significant difference in the degree of persistence, especially once we remove observations in years with inconsistent forecasts. However, in other quarters, the differences are statistically significant in about half of all cases. There is no clear pattern to the differences: in some cases, the persistence is greater in the approximate forecasts, while in other cases it is smaller.

High levels of persistence in forecasts are commonly interpreted as indicators of poorly anchored expectations, since they imply that any departure from an inflation target is likely to self-perpetuate. Studies of central bank credibility sometimes focus on changes in the persistence of inflation forecasts over time (eg Corbo et al (2001)). Even more important than any bias in the level of persistence, therefore, would be changes in the amount of bias over time. To check for this, we divide the sample approximately in half and estimate the above equation on each subsample. Graph 3 displays the point estimates.

The magnitude of this difference (as much as 0.3) is roughly the same magnitude as the underlying persistence in the forecasts themselves, suggesting that any studies of changing forecast persistence based on approximate data may be misleading.

**5. Improving on the approximation**

We have highlighted some problems with approximating fixed-horizon inflation forecasts using fixed-event forecasts. Now we offer a partial fix that improves on the standard approximation in terms of better matching the statistical properties of the actual forecast data.

Kozicki and Tinsley (2012), Mehrotra and Yetman (2014) and Yetman (2017) have suggested models of inflation forecasts where these are increasingly "anchored" as the forecast horizon increases. One formulation of this is the following:

$$\pi^e_{t,t+k} = \left(1 - b^{-k}\right)\pi^* + b^{-k}\,\pi_t + \varepsilon_{t,k},$$

where $\pi^*$ is the inflation anchor, $\pi_t$ is the latest available inflation outcome at the time the forecast is made, $k$ is the forecast horizon and $b > 1$ governs the strength of anchoring: the weight on the anchor, $1 - b^{-k}$, is concave and increasing in the horizon.

Note that for $k = 12$, this expression corresponds to the 12-month fixed-horizon forecast that the approximation seeks to recover.
To implement this adjustment to the approximation, we focus on the 65 forecasters who provided forecasts to the SPF at least 20 times within our sample. While these represent only 37% of the 174 forecasters in the SPF panel, their forecasts constitute 79% of the total forecast panel, as many of the other forecasters only enter a very limited number of times. We then estimate equation (14) for each forecaster, obtaining estimates of $b$ that range from 1.4 to 76.8.
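A per-forecaster estimation of the anchoring parameter can be sketched as follows. The model form, the grid-search estimator and all names here are our own illustrative assumptions (the paper estimates its equation (14), which is not reproduced here):

```python
import numpy as np

def estimate_b(horizons, forecasts, latest_inflation, anchor, grid=None):
    """Grid-search estimate of b in the illustrative anchoring model
    pi_hat(k) = (1 - b**-k) * anchor + b**-k * latest_inflation,
    minimising the sum of squared deviations from observed forecasts."""
    horizons = np.asarray(horizons, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    if grid is None:
        grid = np.linspace(1.01, 100.0, 5000)
    best_b, best_sse = None, np.inf
    for b in grid:
        fitted = (1 - b ** -horizons) * anchor + b ** -horizons * latest_inflation
        sse = float(np.sum((forecasts - fitted) ** 2))
        if sse < best_sse:
            best_b, best_sse = b, sse
    return best_b

# Forecasts generated from the model with b = 2 should be recovered closely.
k = [3, 6, 9, 12, 15]
obs = [(1 - 2.0 ** -h) * 2.0 + 2.0 ** -h * 4.0 for h in k]
print(round(estimate_b(k, obs, latest_inflation=4.0, anchor=2.0), 2))
```

A grid search is used only to keep the sketch dependency-free; any nonlinear least-squares routine would serve the same purpose.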

The results for the first and second moments are given in Table 3, and for higher moments and persistence in Table 4. In comparing the adjustment with the standard approximation, we focus only on the unshaded portions of the table - ie the elements where (i) there is an approximation error inherent in the data and (ii) $b > 1$.

The remainder of Table 3 compares the performance of the variance across the panel of forecasters. Once again, our adjusted approximation generally outperforms the standard approximation: in six of the eight cases, the variance is closer to the actual variance using our adjustment in place of the standard approximation, and the gap declines by an average of 22% across all eight cases. Moreover, the difference between the variances is statistically significant in only two cases (at the 95% level) using our adjustment, versus four cases using the standard approximation.

Moving on to higher moments, here improvements are also evident (Table 4). The gap between the skewness of the distribution of approximate and actual forecasts (columns 3-6) declines using the adjusted approximation in five of the eight cases, and by an average of 9% overall. Moreover, the gap in the case of kurtosis (columns 7-10) decreases in all but one case, by an overall average of 23%.

Finally, we compare the estimated persistence, in a manner analogous to the discussion of Table 2. Here the results are more modest. The gap between the persistence of the approximate inflation forecast and the actual inflation forecast declines by 5% with our adjustment. Further, the difference in persistence between the approximation and the actual data is statistically significant (at the 95% level) in three cases with the adjustment versus five cases without it. However, the level of the gap actually increases in more cases (five) than it decreases (three).

Taken together, our results indicate that our proposed adjustment improves the performance of a standard approximation that researchers have used to convert fixed-event inflation forecasts into fixed- horizon forecasts.

We can explain the effectiveness of our adjustment in the following way. Equation (17) implies that the weight on the inflation anchor varies with the forecast horizon in the manner displayed in Graph 4. Suppose that we wish to have a 12-month fixed-horizon forecast, but only have annual inflation forecasts for two calendar years that each partially overlap with the 12-month horizon. The standard approximation implicitly sets the weight on the anchor by linear interpolation along the chord between the weights at those two horizons; because the weighting function is concave, this lies below the weight that the model implies at the 12-month horizon.

The effect of our adjustment is to use the estimate of the curvature of the function (based on $b$) to move the weight on the anchor from the chord in the graph up to the function itself.

Note, however, that this is not a complete fix for the problem of using the approximation, especially in terms of estimates of the persistence of inflation expectations. Clearly, our simple model of forecasts underlying the adjustment to the approximation is incomplete, and so our correction is only partial. One possibility for further work would be to model the error process in the expectations formation process, which might provide scope for further improvements. Additionally, our correction may be less effective if expectations are poorly anchored. In our model, that would be characterised by a low value of $b$ and less curvature in the function in Graph 4, so that the gap between the standard approximation and the weight implied by the model would be small in any case.

**6. Conclusions**

In this paper, we have shown that approximating fixed-horizon forecasts using fixed-event forecasts is problematic. In the SPF data that we examined, it results in a mean absolute approximation error of around 0.2-0.3 percentage points, or about 10% of the level of inflation. In addition, it results in statistically significant changes in both the variances and persistence of the resulting inflation forecast measures.

So what should researchers who desire a fixed-horizon forecast, but only have fixed-event forecasts available, do? We have shown one path to improve on the performance of the approximation, based on a model where inflation forecasts are a weighted average of an anchor and the latest available inflation forecast at the time that the forecasts are made, with the weight on the anchor declining as the forecast horizon shortens. Our model implies placing a relatively higher weight on the longer-horizon forecast than is the case using the standard approximation. Our proposed correction reduces the mean absolute error and mean squared error of the approximation (by 13% and 21% respectively).

It also reduces the number of statistically significant differences between the approximate and actual inflation forecasts in terms of the variance and persistence. Further, the skewness and kurtosis of the distribution of adjusted approximate forecasts is more like that for the distribution of actual forecasts than when the standard approximation is used.

**References**

Capistran, C and M Ramos-Francia (2010): "Does inflation targeting affect the dispersion of inflation expectations?" ** Journal of Money, Credit and Banking** 42(1), 113-134.

Corbo, V, O Landerretche and K Schmidt-Hebbel (2001): "Assessing inflation targeting after a decade of world experience," ** International Journal of Finance and Economics** 6(4), 343-368.

Davies, A and K Lahiri (1995): "A new framework for analyzing survey forecasts using three-dimensional panel data," ** Journal of Econometrics** 68(1), 205-227.

Davis, J (2014): "Inflation targeting and the anchoring of inflation expectations: cross-country evidence from consensus forecasts," ** Federal Reserve Bank of Dallas Globalization and Monetary Policy Institute Working Paper** no 174.

Dovern, J and U Fritsche (2008): "Estimating fundamental cross-section dispersion from fixed-event forecasts," ** University of Hamburg Discussion Paper** no 1.

Dovern, J, U Fritsche and J Slacalek (2012): "Disagreement among forecasters in G7 countries," ** Review of Economics and Statistics** 94(4), 1081-1096.

Gerlach, S (2007): "Interest rate setting by the ECB, 1999-2006: Words and deeds," ** International Journal of Central Banking** 3(3), 1-45.

Heenan, G, M Peter and S Roger (2006): "Implementing inflation targeting: institutional arrangements, target design, and communications," ** IMF Working Paper** no 06/278.

Johnson, D (2002): "The effect of inflation targeting on the behavior of expected inflation: evidence from an 11 country panel," ** Journal of Monetary Economics** 49(8), 1521-1538.

Johnson, D (2003): "The effect of inflation targets on the level of expected inflation in five countries," ** Review of Economics and Statistics** 85(4), 1076-1081.

Kortelainen, M, M Paloviita and M Viren (2011): "Observed inflation forecasts and the New Keynesian macro model," ** Economics Letters** 112(1), 88-90.

Kozicki, S and P Tinsley (2012): "Effective use of survey inflation in estimating the evolution of expected inflation," ** Journal of Money, Credit and Banking** 44(1), 145-169.

Lees, K (2016): "Assessing forecast performance," ** Bulletin** 79(10), Reserve Bank of New Zealand, June.

Mavroeidis, S, M Plagborg-Møller and J Stock (2014): "Empirical evidence on inflation expectations in the New Keynesian Phillips Curve," ** Journal of Economic Literature** 52(1), 124-188.

Mehrotra, A and J Yetman (2014): "Decaying expectations: what inflation forecasts tell us about the anchoring of inflation expectations," ** BIS Working Papers** no 464.

Moreno, R and A Villar (2010): "Inflation expectations, persistence and monetary policy", ** BIS Papers **no 49, 77-92.

Patton, A and A Timmermann (2010): "Why do forecasters disagree? Lessons from the term structure of cross-sectional dispersion," ** Journal of Monetary Economics** 57(7), 803-820.

Siklos, P (2013): "Sources of disagreement in inflation forecasts: an international empirical investigation," ** Journal of International Economics** 90(1), 218-231.

Siklos, P (2017): "What has publishing central bank forecasts accomplished? Central banks and their competitors," in D Mayes, P Siklos and J Sturm (eds), ** Handbook of Central Banking**, Oxford University Press, forthcoming.

Sousa, R and J Yetman (2016): "Inflation expectations and monetary policy," ** BIS Papers** no 89, 41-67.

Yetman, J (2017): "The evolution of inflation expectations in Canada and the US," ** Canadian Journal of Economics** 50(3), 711-737.