# Tutorial

Let us consider chapter 7 of the excellent treatise on exponential smoothing by Hyndman and Athanasopoulos [1]. We will work through all of the examples in the chapter as they unfold.

# Exponential smoothing

First we load some data. We have included the R data in the notebook for expedience.

In [1]:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt

data = [446.6565,  454.4733,  455.663 ,  423.6322,  456.2713,  440.5881, 425.3325,  485.1494,  506.0482,  526.792 ,  514.2689,  494.211 ]
index= pd.date_range(start='1996', end='2008', freq='A')
oildata = pd.Series(data, index)

data = [17.5534,  21.86  ,  23.8866,  26.9293,  26.8885,  28.8314, 30.0751,  30.9535,  30.1857,  31.5797,  32.5776,  33.4774, 39.0216,  41.3864,  41.5966]
index= pd.date_range(start='1990', end='2005', freq='A')
air = pd.Series(data, index)

data = [263.9177,  268.3072,  260.6626,  266.6394,  277.5158,  283.834 , 290.309 ,  292.4742,  300.8307,  309.2867,  318.3311,  329.3724, 338.884 ,  339.2441,  328.6006,  314.2554,  314.4597,  321.4138, 329.7893,  346.3852,  352.2979,  348.3705,  417.5629,  417.1236, 417.7495,  412.2339,  411.9468,  394.6971,  401.4993,  408.2705, 414.2428]
index= pd.date_range(start='1970', end='2001', freq='A')
livestock2 = pd.Series(data, index)

data = [407.9979 ,  403.4608,  413.8249,  428.105 ,  445.3387,  452.9942, 455.7402]
index= pd.date_range(start='2001', end='2008', freq='A')
livestock3 = pd.Series(data, index)

data = [41.7275,  24.0418,  32.3281,  37.3287,  46.2132,  29.3463, 36.4829,  42.9777,  48.9015,  31.1802,  37.7179,  40.4202, 51.2069,  31.8872,  40.9783,  43.7725,  55.5586,  33.8509, 42.0764,  45.6423,  59.7668,  35.1919,  44.3197,  47.9137]
index= pd.date_range(start='2005', end='2010-Q4', freq='QS-OCT')
aust = pd.Series(data, index)


## Simple Exponential Smoothing

Let's use Simple Exponential Smoothing to forecast the oil data below.

In [2]:
ax=oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
plt.show()
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.")

Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.


Here we run three variants of simple exponential smoothing:

1. In fit1 we do not use automatic optimization but instead choose to explicitly provide the model with the parameter $\alpha=0.2$.
2. In fit2 we do the same but choose $\alpha=0.6$.
3. In fit3 we allow statsmodels to automatically find an optimized $\alpha$ value for us. This is the recommended approach.
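Under the hood, simple exponential smoothing is just the recursion $l_t = \alpha y_t + (1-\alpha) l_{t-1}$, with every future forecast equal to the final level. A minimal hand-rolled sketch to make the recursion concrete (illustrative only, not the statsmodels implementation; here the initial level `l0` is supplied by hand rather than estimated):

```python
def ses_forecast(y, alpha, l0):
    """One-step-ahead simple exponential smoothing forecast.

    level_t = alpha * y_t + (1 - alpha) * level_{t-1}
    The forecast for every future step is the final level.
    """
    level = l0
    for obs in y:
        level = alpha * obs + (1 - alpha) * level
    return level

# With alpha=0.5 and an initial level of 10:
# after y=10 the level stays 10; after y=12 it becomes 11.
print(ses_forecast([10, 12], alpha=0.5, l0=10))  # -> 11.0
```

Larger $\alpha$ weights recent observations more heavily, which is exactly what the $\alpha=0.2$ versus $\alpha=0.6$ fits below contrast.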
In [3]:
fit1 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.2,optimized=False)
fcast1 = fit1.forecast(3).rename(r'$\alpha=0.2$')
fit2 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.6,optimized=False)
fcast2 = fit2.forecast(3).rename(r'$\alpha=0.6$')
fit3 = SimpleExpSmoothing(oildata).fit()
fcast3 = fit3.forecast(3).rename(r'$\alpha=%s$'%fit3.model.params['smoothing_level'])

ax = oildata.plot(marker='o', color='black', figsize=(12,8))
fcast1.plot(marker='o', ax=ax, color='blue', legend=True)
fit1.fittedvalues.plot(marker='o', ax=ax, color='blue')
fcast2.plot(marker='o', ax=ax, color='red', legend=True)

fit2.fittedvalues.plot(marker='o', ax=ax, color='red')
fcast3.plot(marker='o', ax=ax, color='green', legend=True)
fit3.fittedvalues.plot(marker='o', ax=ax, color='green')
plt.show()


## Holt's Method

Let's take a look at another example. This time we use air pollution data and Holt's method. We will again fit three examples.

1. In fit1 we again choose not to use the optimizer and provide explicit values for $\alpha=0.8$ and $\beta=0.2$.
2. In fit2 we do the same as in fit1 but use an exponential model rather than Holt's additive model.
3. In fit3 we use a damped version of Holt's additive model, allowing the damping parameter $\phi$ to be optimized while fixing $\alpha=0.8$ and $\beta=0.2$.
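Holt's linear method extends simple exponential smoothing with a trend state. A small sketch of the additive recursion (my own illustration; statsmodels estimates the initial states rather than taking them as arguments like this):

```python
def holt_linear(y, alpha, beta, l0, b0, h):
    """Holt's additive trend recursion:
    l_t = alpha*y_t + (1-alpha)*(l_{t-1} + b_{t-1})
    b_t = beta*(l_t - l_{t-1}) + (1-beta)*b_{t-1}
    h-step forecast: l_T + h * b_T
    """
    level, trend = l0, b0
    for obs in y:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend

# With alpha=beta=1 the states track the data exactly, so a perfectly
# linear series (slope 2) is continued exactly: 5 + 2*2 = 9.
print(holt_linear([1, 3, 5], alpha=1, beta=1, l0=0, b0=0, h=2))  # -> 9
```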
In [4]:
fit1 = Holt(air).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped=True).fit(smoothing_level=0.8, smoothing_slope=0.2)
fcast3 = fit3.forecast(5).rename("Additive damped trend")

ax = air.plot(color="black", marker="o", figsize=(12,8))
fit1.fittedvalues.plot(ax=ax, color='blue')
fcast1.plot(ax=ax, color='blue', marker="o", legend=True)
fit2.fittedvalues.plot(ax=ax, color='red')
fcast2.plot(ax=ax, color='red', marker="o", legend=True)
fit3.fittedvalues.plot(ax=ax, color='green')
fcast3.plot(ax=ax, color='green', marker="o", legend=True)

plt.show()


Let's look at some seasonally adjusted livestock data. We fit five models: simple exponential smoothing and four variants of Holt's method. The table below allows us to compare results when we use exponential versus additive and damped versus non-damped trends.

Note: fit4 does not allow the parameter $\phi$ to be optimized; instead we provide the fixed value $\phi=0.98$.
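With a damped trend, the h-step forecast multiplies the trend by the damped sum $\phi + \phi^2 + \dots + \phi^h$ rather than by $h$, so long-horizon forecasts flatten out. A quick numeric sketch (the state values here are invented purely for illustration):

```python
def damped_forecast(level, trend, phi, h):
    """h-step additive damped-trend forecast:
    y_hat_{T+h} = l_T + (phi + phi^2 + ... + phi^h) * b_T
    """
    damped_sum = sum(phi ** i for i in range(1, h + 1))
    return level + damped_sum * trend

# With phi=0.5 the damped sum for h=3 is 0.5 + 0.25 + 0.125 = 0.875,
# so a trend of 2 contributes only 1.75 instead of 6.
print(damped_forecast(level=100, trend=2, phi=0.5, h=3))  # -> 101.75
```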

In [5]:
fit1 = SimpleExpSmoothing(livestock2).fit()
fit2 = Holt(livestock2).fit()
fit3 = Holt(livestock2,exponential=True).fit()
fit4 = Holt(livestock2,damped=True).fit(damping_slope=0.98)
fit5 = Holt(livestock2,exponential=True,damped=True).fit()
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'initial_level', 'initial_slope']
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$l_0$","$b_0$","SSE"] ,columns=['SES', "Holt's","Exponential", "Additive", "Multiplicative"])
results["SES"] =            [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] =         [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] =    [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] =       [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results

Out[5]:
           SES          Holt's       Exponential  Additive     Multiplicative
$\alpha$   1.000000     0.974306     0.977633     0.978847     0.974911
$\beta$    NaN          0.000000     0.000000     0.000000     0.000000
$\phi$     NaN          NaN          NaN          0.980000     0.981646
$l_0$      263.917700   258.882647   260.341533   257.357648   258.951944
$b_0$      NaN          5.010775     1.013780     6.644538     1.038144
SSE        6761.350218  6004.138200  6104.194746  6036.555004  6081.995045

### Plots of Seasonally Adjusted Data

The following plots allow us to evaluate the level and slope/trend components of the fits in the table above.

In [6]:
for fit in [fit2, fit4]:
    pd.DataFrame(np.c_[fit.level, fit.slope]).rename(
        columns={0: 'level', 1: 'slope'}).plot(subplots=True)
plt.show()
print("Figure 7.4: Level and slope components for Holt's linear trend method and the additive damped trend method.")

Figure 7.4: Level and slope components for Holt's linear trend method and the additive damped trend method.


## Comparison

Here we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential and damped combinations. All of the models' parameters will be optimized by statsmodels.

In [7]:
fit1 = SimpleExpSmoothing(livestock2).fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2).fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True).fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped=True).fit(damping_slope=0.98)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(livestock2, exponential=True, damped=True).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")

ax = livestock2.plot(color="black", marker="o", figsize=(12,8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color='red', legend=True)
fcast2.plot(ax=ax, color='green', legend=True)
fcast3.plot(ax=ax, color='blue', legend=True)
fcast4.plot(ax=ax, color='cyan', legend=True)
fcast5.plot(ax=ax, color='magenta', legend=True)
ax.set_ylabel('Livestock, sheep in Asia (millions)')
plt.show()
print('Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.')

Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.


## Holt-Winters Seasonal

Finally we are able to run full Holt-Winters seasonal exponential smoothing, including both a trend component and a seasonal component. statsmodels allows for all the combinations, as shown in the examples below:

1. fit1: additive trend, additive seasonal with seasonal_periods=4, and a Box-Cox transformation.
2. fit2: additive trend, multiplicative seasonal with seasonal_periods=4, and a Box-Cox transformation.
3. fit3: additive damped trend, additive seasonal with seasonal_periods=4, and a Box-Cox transformation.
4. fit4: additive damped trend, multiplicative seasonal with seasonal_periods=4, and a Box-Cox transformation.

The plot shows the results and forecast for fit1 and fit2. The table allows us to compare the results and parameterizations.
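Passing use_boxcox=True fits the model on Box-Cox transformed data, $w_t = (y_t^\lambda - 1)/\lambda$ for $\lambda \neq 0$ and $\ln y_t$ for $\lambda = 0$. A minimal sketch of the transform itself (statsmodels estimates $\lambda$ internally; here it is passed in by hand for illustration):

```python
import math

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam, or log(y) when lam == 0."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

# lam=1 just shifts the data down by one; lam=0 is the natural log.
print(boxcox([1.0, 2.0], lam=1))  # -> [0.0, 1.0]
```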

In [8]:
fit1 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
fit2 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit(use_boxcox=True)
fit3 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add', damped=True).fit(use_boxcox=True)
fit4 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul', damped=True).fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"]       = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"]   = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]

ax = aust.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style='--', color='red')
fit2.fittedvalues.plot(ax=ax, style='--', color='green')

fit2.forecast(8).rename('Holt-Winters (add-mul-seasonal)').plot(ax=ax, style='--', marker='o', color='green', legend=True)

plt.show()
print("Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.")

results

/home/travis/build/statsmodels/statsmodels/statsmodels/tsa/holtwinters.py:712: ConvergenceWarning: Optimization failed to converge. Check mle_retvals.
ConvergenceWarning)

Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.

Out[8]:
           Additive      Multiplicative  Additive Dam  Multiplica Dam
$\alpha$   4.546537e-01  3.659724e-01    5.872063e-09  0.000188
$\beta$    1.504491e-08  3.972934e-20    2.094129e-10  0.000188
$\phi$     NaN           NaN             9.430523e-01  0.913115
$\gamma$   5.243716e-01  5.723091e-14    1.206545e-08  0.000000
$l_0$      1.421755e+01  1.454872e+01    1.415783e+01  14.534995
$b_0$      1.307644e-01  1.661293e-01    2.456396e-01  0.485297
SSE        5.001700e+01  4.307058e+01    3.527502e+01  39.776214

### The Internals

It is possible to get at the internals of the Exponential Smoothing models.

Here we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\hat{y}_t$.
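For the additive model these columns tie together as $\hat{y}_{t+h} = l_t + h\,b_t + s_{t+h-m}$. A small sketch reconstructing forecasts from the final states (the numbers are invented for illustration; statsmodels computes this internally in forecast):

```python
def hw_additive_forecast(level, trend, seasonals, h, m=4):
    """h-step forecast from Holt-Winters additive states:
    y_hat_{T+h} = l_T + h*b_T + s_{T+h-m}
    `seasonals` holds the last m seasonal states, oldest first.
    """
    season = seasonals[(h - 1) % m]
    return level + h * trend + season

# Final level 100, trend 2, seasonal pattern [-5, 3, 1, 1]:
print(hw_additive_forecast(100, 2, [-5, 3, 1, 1], h=1))  # -> 97
print(hw_additive_forecast(100, 2, [-5, 3, 1, 1], h=5))  # -> 105
```

Note that h=1 and h=5 reuse the same seasonal state, one year (m=4 quarters) apart.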

In [9]:
df = pd.DataFrame(np.c_[aust, fit1.level, fit1.slope, fit1.season, fit1.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit1.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)

Out[9]:
$\hat{y}_t$ $b_t$ $l_t$ $s_t$ $y_t$
2005-01-01 41.721328 -34.969391 49.317702 -7.593358 41.7275
2005-04-01 24.190438 -35.452660 49.932295 -25.834452 24.0418
2005-07-01 31.460284 -36.532773 51.126097 -19.182980 32.3281
2005-10-01 36.634512 -37.397682 52.210123 -15.209821 37.3287
2006-01-01 45.097599 -38.467279 53.476780 -7.837278 46.2132
2006-04-01 27.191843 -40.276276 55.513810 -27.017209 29.3463
2006-07-01 36.544173 -40.624970 56.224387 -19.713790 36.4829
2006-10-01 41.449305 -42.041761 57.766058 -15.523322 42.9777
2007-01-01 50.934359 -41.543717 57.536633 -7.588705 48.9015
2007-04-01 31.418089 -42.197769 58.150908 -26.874254 31.1802
2007-07-01 38.718255 -42.303344 58.362831 -20.191826 37.7179
2007-10-01 44.140416 -41.085567 57.181630 -14.982778 40.4202
2008-01-01 49.315600 -42.961372 58.852827 -8.619729 51.2069
2008-04-01 32.306886 -43.186386 59.366800 -27.309047 31.8872
2008-07-01 39.207383 -44.826876 61.095444 -20.925386 40.9783
2008-10-01 44.551337 -44.899206 61.461868 -17.319602 43.7725
2009-01-01 54.357872 -46.192057 62.816594 -7.881490 55.5586
2009-04-01 35.153774 -45.979909 62.831837 -28.447649 33.8509
2009-07-01 43.066270 -46.227866 63.082341 -20.550478 42.0764
2009-10-01 45.871123 -46.852197 63.748482 -17.997475 45.6423
2010-01-01 57.166397 -48.770206 65.777323 -7.371907 59.7668
2010-04-01 36.761393 -48.307971 65.649602 -29.816482 35.1919
2010-07-01 44.932365 -48.798283 66.118988 -21.517119 44.3197
2010-10-01 48.399457 -49.269410 66.666933 -18.521871 47.9137
2011-01-01 61.337652 NaN NaN NaN NaN
2011-04-01 37.242736 NaN NaN NaN NaN
2011-07-01 46.842435 NaN NaN NaN NaN
2011-10-01 51.004912 NaN NaN NaN NaN
2012-01-01 64.470222 NaN NaN NaN NaN
2012-04-01 39.776555 NaN NaN NaN NaN
2012-07-01 49.635412 NaN NaN NaN NaN
2012-10-01 53.900859 NaN NaN NaN NaN
In [10]:
df = pd.DataFrame(np.c_[aust, fit2.level, fit2.slope, fit2.season, fit2.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit2.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)

Out[10]:
$\hat{y}_t$ $b_t$ $l_t$ $s_t$ $y_t$
2005-01-01 41.859445 -36.532933 51.247778 0.815848 41.7275
2005-04-01 25.838436 -35.869321 50.739336 0.495350 24.0418
2005-07-01 31.659119 -37.286636 52.063698 0.612960 32.3281
2005-10-01 35.189404 -39.172221 54.190573 0.664150 37.3287
2006-01-01 44.928851 -40.309301 55.709674 0.815007 46.2132
2006-04-01 27.933752 -42.090909 57.760181 0.493030 29.3463
2006-07-01 35.824653 -43.103794 59.131057 0.610066 36.4829
2006-10-01 39.768733 -45.647773 61.911314 0.661674 42.9777
2007-01-01 51.174549 -45.124852 61.860432 0.813548 48.9015
2007-04-01 30.814592 -46.412244 63.139268 0.490275 31.1802
2007-07-01 39.009395 -46.390086 63.331227 0.608198 37.7179
2007-10-01 42.485924 -46.174334 63.147452 0.660411 40.4202
2008-01-01 52.173797 -46.763121 63.705626 0.813341 51.2069
2008-04-01 31.677300 -47.839451 64.874818 0.489531 31.8872
2008-07-01 40.035848 -49.243220 66.471970 0.607648 40.9783
2008-10-01 44.515888 -49.579026 67.069527 0.659550 43.7725
2009-01-01 55.343068 -50.606363 68.194094 0.812723 55.5586
2009-04-01 33.773199 -51.519423 69.289127 0.487856 33.8509
2009-07-01 42.644402 -52.029399 69.975055 0.606348 42.0764
2009-10-01 46.778326 -52.314759 70.370019 0.658661 45.6423
2010-01-01 58.008753 -54.098265 72.216425 0.812245 59.7668
2010-04-01 35.648488 -54.504187 72.914398 0.486496 35.1919
2010-07-01 44.784504 -55.168048 73.687816 0.605373 44.3197
2010-10-01 49.174352 -55.393774 74.034413 0.657793 47.9137
2011-01-01 60.967057 NaN NaN NaN NaN
2011-04-01 36.993774 NaN NaN NaN NaN
2011-07-01 46.712435 NaN NaN NaN NaN
2011-10-01 51.482575 NaN NaN NaN NaN
2012-01-01 64.455823 NaN NaN NaN NaN
2012-04-01 39.017128 NaN NaN NaN NaN
2012-07-01 49.291729 NaN NaN NaN NaN
2012-10-01 54.319826 NaN NaN NaN NaN

Finally, let's look at the levels, slopes/trends and seasonal components of the models.

In [11]:
states1 = pd.DataFrame(np.c_[fit1.level, fit1.slope, fit1.season], columns=['level','slope','seasonal'], index=aust.index)
states2 = pd.DataFrame(np.c_[fit2.level, fit2.slope, fit2.season], columns=['level','slope','seasonal'], index=aust.index)
fig, [[ax1, ax4],[ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12,8))
states1[['level']].plot(ax=ax1)
states1[['slope']].plot(ax=ax2)
states1[['seasonal']].plot(ax=ax3)
states2[['level']].plot(ax=ax4)
states2[['slope']].plot(ax=ax5)
states2[['seasonal']].plot(ax=ax6)
plt.show()