Tutorial

This tutorial follows Chapter 7 of the excellent treatise on exponential smoothing by Hyndman and Athanasopoulos [1]. We will work through the examples in the chapter as they unfold.

[1] [Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2014.](https://www.otexts.org/fpp/7)

Exponential smoothing

First we load some data. The R datasets used in [1] are included directly in the notebook for convenience.

[1]:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
%matplotlib inline

data = [446.6565,  454.4733,  455.663 ,  423.6322,  456.2713,  440.5881, 425.3325,  485.1494,  506.0482,  526.792 ,  514.2689,  494.211 ]
index= pd.date_range(start='1996', end='2008', freq='A')
oildata = pd.Series(data, index)

data = [17.5534,  21.86  ,  23.8866,  26.9293,  26.8885,  28.8314, 30.0751,  30.9535,  30.1857,  31.5797,  32.5776,  33.4774, 39.0216,  41.3864,  41.5966]
index= pd.date_range(start='1990', end='2005', freq='A')
air = pd.Series(data, index)

data = [263.9177,  268.3072,  260.6626,  266.6394,  277.5158,  283.834 , 290.309 ,  292.4742,  300.8307,  309.2867,  318.3311,  329.3724, 338.884 ,  339.2441,  328.6006,  314.2554,  314.4597,  321.4138, 329.7893,  346.3852,  352.2979,  348.3705,  417.5629,  417.1236, 417.7495,  412.2339,  411.9468,  394.6971,  401.4993,  408.2705, 414.2428]
index= pd.date_range(start='1970', end='2001', freq='A')
livestock2 = pd.Series(data, index)

data = [407.9979 ,  403.4608,  413.8249,  428.105 ,  445.3387,  452.9942, 455.7402]
index= pd.date_range(start='2001', end='2008', freq='A')
livestock3 = pd.Series(data, index)

data = [41.7275,  24.0418,  32.3281,  37.3287,  46.2132,  29.3463, 36.4829,  42.9777,  48.9015,  31.1802,  37.7179,  40.4202, 51.2069,  31.8872,  40.9783,  43.7725,  55.5586,  33.8509, 42.0764,  45.6423,  59.7668,  35.1919,  44.3197,  47.9137]
index= pd.date_range(start='2005', end='2010-Q4', freq='QS-OCT')
aust = pd.Series(data, index)

Simple Exponential Smoothing

Let's use Simple Exponential Smoothing to forecast the oil data shown below.

[2]:
ax=oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.")
Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.

Here we run three variants of simple exponential smoothing:

1. In fit1 we do not use automatic optimization but instead explicitly provide the model with \(\alpha=0.2\).
2. In fit2 we likewise fix the parameter, this time choosing \(\alpha=0.6\).
3. In fit3 we allow statsmodels to automatically find an optimized \(\alpha\) value for us. This is the recommended approach.
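
For reference, the component form of simple exponential smoothing from [1] is

\(l_t = \alpha y_t + (1 - \alpha) l_{t-1}\)

\(\hat{y}_{t+h|t} = l_t\)

so forecasts are flat at the last estimated level, and the smoothing parameter \(0 \le \alpha \le 1\) controls how quickly older observations are discounted.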

[3]:
fit1 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.2,optimized=False)
fcast1 = fit1.forecast(3).rename(r'$\alpha=0.2$')
fit2 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.6,optimized=False)
fcast2 = fit2.forecast(3).rename(r'$\alpha=0.6$')
fit3 = SimpleExpSmoothing(oildata).fit()
fcast3 = fit3.forecast(3).rename(r'$\alpha=%s$'%fit3.model.params['smoothing_level'])

plt.figure(figsize=(12, 8))
plt.plot(oildata, marker='o', color='black')
plt.plot(fit1.fittedvalues, marker='o', color='blue')
line1, = plt.plot(fcast1, marker='o', color='blue')
plt.plot(fit2.fittedvalues, marker='o', color='red')
line2, = plt.plot(fcast2, marker='o', color='red')
plt.plot(fit3.fittedvalues, marker='o', color='green')
line3, = plt.plot(fcast3, marker='o', color='green')
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])
[3]:
<matplotlib.legend.Legend at 0x7f6fb7e1ebd0>
[Figure: oil production data with simple exponential smoothing fits and three-period-ahead forecasts for the three values of \(\alpha\)]

Holt’s Method

Let's take a look at another example. This time we use the air passenger data and Holt's method. We will again fit three models:

1. In fit1 we again choose not to use the optimizer and provide explicit values for \(\alpha=0.8\) and \(\beta=0.2\).
2. In fit2 we do the same as in fit1 but use an exponential (multiplicative) trend rather than Holt's additive trend.
3. In fit3 we use a damped version of Holt's additive model, allowing the damping parameter \(\phi\) to be optimized while fixing \(\alpha=0.8\) and \(\beta=0.2\).
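
For reference, Holt's linear trend method from [1] adds a trend component \(b_t\) to the level:

\(\hat{y}_{t+h|t} = l_t + h b_t\)

\(l_t = \alpha y_t + (1 - \alpha)(l_{t-1} + b_{t-1})\)

\(b_t = \beta (l_t - l_{t-1}) + (1 - \beta) b_{t-1}\)

The exponential (multiplicative) trend variant forecasts \(\hat{y}_{t+h|t} = l_t b_t^{h}\), and the damped variant replaces \(h b_t\) with \((\phi + \phi^2 + \cdots + \phi^{h}) b_t\), so that forecasts level off as \(h\) grows.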

[4]:
fit1 = Holt(air).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped=True).fit(smoothing_level=0.8, smoothing_slope=0.2)
fcast3 = fit3.forecast(5).rename("Additive damped trend")

plt.figure(figsize=(12, 8))
plt.plot(air, marker='o', color='black')
plt.plot(fit1.fittedvalues, color='blue')
line1, = plt.plot(fcast1, marker='o', color='blue')
plt.plot(fit2.fittedvalues, color='red')
line2, = plt.plot(fcast2, marker='o', color='red')
plt.plot(fit3.fittedvalues, color='green')
line3, = plt.plot(fcast3, marker='o', color='green')
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])
[4]:
<matplotlib.legend.Legend at 0x7f6fb7de6790>
[Figure: the air series with forecasts from Holt's linear trend, exponential trend, and additive damped trend methods]

Seasonally adjusted data

Let's look at some seasonally adjusted livestock data. We fit five models: simple exponential smoothing and four variants of Holt's method. The table below allows us to compare the results for exponential versus additive trends and for damped versus non-damped trends.

Note: in fit4 the damping parameter \(\phi\) is not optimized; it is fixed at \(\phi=0.98\).

[5]:
fit1 = SimpleExpSmoothing(livestock2).fit()
fit2 = Holt(livestock2).fit()
fit3 = Holt(livestock2,exponential=True).fit()
fit4 = Holt(livestock2,damped=True).fit(damping_slope=0.98)
fit5 = Holt(livestock2,exponential=True,damped=True).fit()
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'initial_level', 'initial_slope']
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$l_0$","$b_0$","SSE"] ,columns=['SES', "Holt's","Exponential", "Additive", "Multiplicative"])
results["SES"] =            [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] =         [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] =    [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] =       [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results
[5]:
|          | SES         | Holt's      | Exponential | Additive    | Multiplicative |
|----------|-------------|-------------|-------------|-------------|----------------|
| $\alpha$ | 1.000000    | 0.974306    | 0.977634    | 0.978826    | 0.974909       |
| $\beta$  | NaN         | 0.000000    | 0.000000    | 0.000000    | 0.000000       |
| $\phi$   | NaN         | NaN         | NaN         | 0.980000    | 0.981647       |
| $l_0$    | 263.918414  | 258.882566  | 260.341674  | 257.355204  | 258.951884     |
| $b_0$    | NaN         | 5.010780    | 1.013780    | 6.644295    | 1.038144       |
| SSE      | 6761.350218 | 6004.138200 | 6104.194747 | 6036.555016 | 6081.995045    |

Plots of Seasonally Adjusted Data

The following plots allow us to evaluate the level and slope/trend components of the above table’s fits.

[6]:
for fit in [fit2,fit4]:
    pd.DataFrame(np.c_[fit.level,fit.slope]).rename(
        columns={0:'level',1:'slope'}).plot(subplots=True)
plt.show()
print('Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method.')
Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method.

Comparison

Here we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential, and damped combinations. All of the models' parameters are optimized by statsmodels, except for the damping parameter of the additive damped model, which is again fixed at \(\phi=0.98\).

[7]:
fit1 = SimpleExpSmoothing(livestock2).fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2).fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True).fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped=True).fit(damping_slope=0.98)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(livestock2, exponential=True, damped=True).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")

ax = livestock2.plot(color="black", marker="o", figsize=(12,8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color='red', legend=True)
fcast2.plot(ax=ax, color='green', legend=True)
fcast3.plot(ax=ax, color='blue', legend=True)
fcast4.plot(ax=ax, color='cyan', legend=True)
fcast5.plot(ax=ax, color='magenta', legend=True)
ax.set_ylabel('Livestock, sheep in Asia (millions)')
plt.show()
print('Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.')
Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.

Holt-Winters Seasonal

Finally we are able to run full Holt-Winters seasonal exponential smoothing, including both a trend component and a seasonal component. statsmodels allows for all of the combinations; the examples below cover four of them (the additive Holt-Winters recursions are shown after this list for reference):

1. fit1: additive trend, additive seasonal with seasonal_periods=4, and a Box-Cox transformation.
2. fit2: additive trend, multiplicative seasonal with seasonal_periods=4, and a Box-Cox transformation.
3. fit3: additive damped trend, additive seasonal with seasonal_periods=4, and a Box-Cox transformation.
4. fit4: additive damped trend, multiplicative seasonal with seasonal_periods=4, and a Box-Cox transformation.
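
For reference, the additive Holt-Winters method from [1] with seasonal period \(m\) (here \(m=4\)) uses

\(\hat{y}_{t+h|t} = l_t + h b_t + s_{t+h-m(k+1)}\), where \(k = \lfloor (h-1)/m \rfloor\)

\(l_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(l_{t-1} + b_{t-1})\)

\(b_t = \beta (l_t - l_{t-1}) + (1 - \beta) b_{t-1}\)

\(s_t = \gamma (y_t - l_{t-1} - b_{t-1}) + (1 - \gamma) s_{t-m}\)

In the multiplicative variant the seasonal component enters as a factor multiplying \(l_t + h b_t\) rather than as an additive offset.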

The plot shows the results and forecast for fit1 and fit2. The table allows us to compare the results and parameterizations.

[8]:
fit1 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
fit2 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit(use_boxcox=True)
fit3 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add', damped=True).fit(use_boxcox=True)
fit4 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul', damped=True).fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"]       = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"]   = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]

ax = aust.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style='--', color='red')
fit2.fittedvalues.plot(ax=ax, style='--', color='green')

fit1.forecast(8).rename('Holt-Winters (add-add-seasonal)').plot(ax=ax, style='--', marker='o', color='red', legend=True)
fit2.forecast(8).rename('Holt-Winters (add-mul-seasonal)').plot(ax=ax, style='--', marker='o', color='green', legend=True)

plt.show()
print("Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.")

results
Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.
[8]:
|          | Additive     | Multiplicative | Additive Dam | Multiplica Dam |
|----------|--------------|----------------|--------------|----------------|
| $\alpha$ | 4.546625e-01 | 3.659925e-01   | 6.788341e-09 | 0.000178       |
| $\beta$  | 1.554872e-08 | 1.832850e-20   | 4.846709e-72 | 0.000178       |
| $\phi$   | NaN          | NaN            | 9.431105e-01 | 0.913344       |
| $\gamma$ | 5.243651e-01 | 2.657584e-14   | 1.084804e-06 | 0.000000       |
| $l_0$    | 1.421756e+01 | 1.454805e+01   | 1.415717e+01 | 14.535054      |
| $b_0$    | 1.307577e-01 | 1.661245e-01   | 2.455367e-01 | 0.483855       |
| SSE      | 5.001711e+01 | 4.307070e+01   | 3.527454e+01 | 39.683104      |

The Internals

It is possible to get at the internals of the Exponential Smoothing models.

Here we show some tables that let you view, side by side, the original values \(y_t\), the level \(l_t\), the trend \(b_t\), the season \(s_t\), and the fitted values \(\hat{y}_t\). Note that these internal states are only on the scale of the original data when the fit is performed without a Box-Cox transformation, which is why use_boxcox=False is passed below.

[9]:
fit1 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=False)
fit2 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit(use_boxcox=False)
/home/travis/build/statsmodels/statsmodels/statsmodels/tsa/holtwinters.py:1115: ConvergenceWarning: Optimization failed to converge. Check mle_retvals.
  ConvergenceWarning)
[10]:
df = pd.DataFrame(np.c_[aust, fit1.level, fit1.slope, fit1.season, fit1.fittedvalues],
                  columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit1.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)
[10]:
$\hat{y}_t$ $b_t$ $l_t$ $s_t$ $y_t$
2005-01-01 41.945933 1.007997 50.081835 -8.354335 41.7275
2005-04-01 24.571438 0.952927 50.866826 -26.825026 24.0418
2005-07-01 32.805762 0.903262 51.618633 -19.290533 32.3281
2005-10-01 38.039255 0.829382 52.222714 -14.894014 37.3287
2006-01-01 44.697761 0.986951 53.690175 -7.476975 46.2132
2006-04-01 27.852100 1.142311 55.306263 -25.959963 29.3463
2006-07-01 37.158041 1.072113 56.164304 -19.681404 36.4829
2006-10-01 42.342403 1.138168 57.503910 -14.526210 42.9777
2007-01-01 51.165104 0.902808 57.688982 -8.787482 48.9015
2007-04-01 32.631828 0.751874 57.980579 -26.800379 31.1802
2007-07-01 39.051049 0.613259 58.171127 -20.453227 37.7179
2007-10-01 44.258176 0.214202 57.168396 -16.748196 40.4202
2008-01-01 48.595116 0.485765 58.482297 -7.275397 51.2069
2008-04-01 32.167683 0.456601 58.849964 -26.962764 31.8872
2008-07-01 38.853338 0.677546 60.201286 -19.222986 40.9783
2008-10-01 44.130636 0.640308 60.728038 -16.955538 43.7725
2009-01-01 54.092949 0.792701 61.985462 -6.426862 55.5586
2009-04-01 35.815399 0.588440 61.951005 -28.100105 33.8509
2009-07-01 43.316460 0.459504 62.017315 -19.940915 42.0764
2009-10-01 45.521282 0.472087 62.527775 -16.885475 45.6423
2010-01-01 56.572999 0.804165 64.344620 -4.577820 59.7668
2010-04-01 37.048679 0.611105 64.366983 -29.175083 35.1919
2010-07-01 45.037173 0.536505 64.675994 -20.356294 44.3197
2010-10-01 48.327025 0.493529 65.038468 -17.124768 47.9137
2011-01-01 60.954177 NaN NaN NaN NaN
2011-04-01 36.850444 NaN NaN NaN NaN
2011-07-01 46.162762 NaN NaN NaN NaN
2011-10-01 50.127111 NaN NaN NaN NaN
2012-01-01 62.928295 NaN NaN NaN NaN
2012-04-01 38.824561 NaN NaN NaN NaN
2012-07-01 48.136879 NaN NaN NaN NaN
2012-10-01 52.101228 NaN NaN NaN NaN
[11]:
df = pd.DataFrame(np.c_[aust, fit2.level, fit2.slope, fit2.season, fit2.fittedvalues],
                  columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit2.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)
[11]:
$\hat{y}_t$ $b_t$ $l_t$ $s_t$ $y_t$
2005-01-01 43.005415 0.912917 51.480945 0.835366 41.7275
2005-04-01 26.352750 0.912917 52.393862 0.502974 24.0418
2005-07-01 33.284547 0.912917 53.306778 0.624396 32.3281
2005-10-01 36.719512 0.912917 54.219695 0.677236 37.3287
2006-01-01 46.055892 0.912917 55.132612 0.835366 46.2132
2006-04-01 28.189443 0.912917 56.045529 0.502974 29.3463
2006-07-01 35.564634 0.912917 56.958446 0.624396 36.4829
2006-10-01 39.192551 0.912917 57.871363 0.677236 42.9777
2007-01-01 49.106370 0.912917 58.784280 0.835366 48.9015
2007-04-01 30.026137 0.912917 59.697196 0.502974 31.1802
2007-07-01 37.844721 0.912917 60.610113 0.624396 37.7179
2007-10-01 41.665593 0.912917 61.523030 0.677236 40.4202
2008-01-01 52.156848 0.912917 62.435947 0.835366 51.2069
2008-04-01 31.862831 0.912917 63.348864 0.502974 31.8872
2008-07-01 40.124808 0.912917 64.261781 0.624396 40.9783
2008-10-01 44.138632 0.912917 65.174697 0.677236 43.7725
2009-01-01 55.207325 0.912917 66.087614 0.835366 55.5586
2009-04-01 33.699525 0.912917 67.000531 0.502974 33.8509
2009-07-01 42.404896 0.912917 67.913448 0.624396 42.0764
2009-10-01 46.611671 0.912917 68.826365 0.677236 45.6423
2010-01-01 58.257803 0.912917 69.739282 0.835366 59.7668
2010-04-01 35.536219 0.912917 70.652199 0.502974 35.1919
2010-07-01 44.684983 0.912917 71.565115 0.624396 44.3197
2010-10-01 49.084710 0.912917 72.478032 0.677236 47.9137
2011-01-01 61.308281 NaN NaN NaN NaN
2011-04-01 37.372912 NaN NaN NaN NaN
2011-07-01 46.965070 NaN NaN NaN NaN
2011-10-01 51.557750 NaN NaN NaN NaN
2012-01-01 64.358759 NaN NaN NaN NaN
2012-04-01 39.209606 NaN NaN NaN NaN
2012-07-01 49.245157 NaN NaN NaN NaN
2012-10-01 54.030789 NaN NaN NaN NaN

Finally, let's look at the level, slope/trend, and seasonal components of the two models.

[12]:
states1 = pd.DataFrame(np.c_[fit1.level, fit1.slope, fit1.season], columns=['level','slope','seasonal'], index=aust.index)
states2 = pd.DataFrame(np.c_[fit2.level, fit2.slope, fit2.season], columns=['level','slope','seasonal'], index=aust.index)
fig, [[ax1, ax4],[ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12,8))
states1[['level']].plot(ax=ax1)
states1[['slope']].plot(ax=ax2)
states1[['seasonal']].plot(ax=ax3)
states2[['level']].plot(ax=ax4)
states2[['slope']].plot(ax=ax5)
states2[['seasonal']].plot(ax=ax6)
plt.show()
[Figure: level, slope, and seasonal state components of fit1 (left column) and fit2 (right column)]

Simulations and Confidence Intervals

By using a state space formulation, we can perform simulations of future values. The mathematical details are described in Hyndman and Athanasopoulos [2] and in the documentation of HoltWintersResults.simulate.

Similar to the example in [2], we use the model with additive trend, multiplicative seasonality, and multiplicative error. We simulate 8 steps into the future and run 100 repetitions. As can be seen in the figure below, the simulations match the forecast values quite well.

[2] [Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice, 2nd edition. OTexts, 2018.](https://otexts.com/fpp2/ets.html)

[13]:
fit = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit()
simulations = fit.simulate(8, repetitions=100, error='mul')

ax = aust.plot(figsize=(10,6), marker='o', color='black',
               title="Forecasts and simulations from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style='--', color='green')
simulations.plot(ax=ax, style='-', alpha=0.05, color='grey', legend=False)
fit.forecast(8).rename('Holt-Winters (add-mul-seasonal)').plot(ax=ax, style='--', marker='o', color='green', legend=True)
plt.show()
[Figure: forecasts and simulated future paths from the Holt-Winters multiplicative method]
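
Because simulate was called with repetitions=100, the simulations object above is a DataFrame with one simulated path per column, indexed by the forecast dates (which is why simulations.plot draws many grey lines). Approximate prediction intervals can therefore be read off as empirical quantiles across the repetitions. A minimal sketch, not part of the original example:

[ ]:
# Empirical 95% interval from the simulated paths: rows are forecast dates,
# columns are the individual repetitions.
intervals = pd.DataFrame({
    'mean': simulations.mean(axis=1),
    'lower': simulations.quantile(0.025, axis=1),
    'upper': simulations.quantile(0.975, axis=1),
})
intervals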

Simulations can also be started at different points in time, and there are multiple options for choosing the random noise.

[14]:
fit = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit()
simulations = fit.simulate(16, anchor='2009-01-01', repetitions=100, error='mul', random_errors='bootstrap')

ax = aust.plot(figsize=(10,6), marker='o', color='black',
               title="Forecasts and simulations from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style='--', color='green')
simulations.plot(ax=ax, style='-', alpha=0.05, color='grey', legend=False)
fit.forecast(8).rename('Holt-Winters (add-mul-seasonal)').plot(ax=ax, style='--', marker='o', color='green', legend=True)
plt.show()
[Figure: simulations starting at 2009-01-01 with bootstrapped errors, alongside the forecast from the Holt-Winters multiplicative method]