ETS models

The ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).

This notebook shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.

statsmodels implements all combinations of:

- additive and multiplicative error models
- additive and multiplicative trend, possibly damped
- additive and multiplicative seasonality

However, not all of these methods are stable. Refer to [1] and the references therein for more information about model stability.
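For illustration, an ETS(M, Ad, M) model, i.e. multiplicative error, additive damped trend, and multiplicative seasonality, could be specified as in the following sketch (here y is assumed to be a quarterly pandas Series; the particular model choice is arbitrary):

from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# ETS(M, Ad, M): multiplicative error, additive damped trend,
# multiplicative seasonality with period 4 (quarterly data)
model = ETSModel(
    y,                     # assumed: a quarterly pandas Series
    error="mul",
    trend="add",
    damped_trend=True,
    seasonal="mul",
    seasonal_periods=4,
)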

[1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7

[1]:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
[2]:
plt.rcParams['figure.figsize'] = (12, 8)

Simple exponential smoothing

The simplest of the ETS models is also known as simple exponential smoothing. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. Its state space formulation is:

\begin{align} y_{t} &= l_{t-1} + e_t\\ l_{t} &= l_{t-1} + \alpha e_t \end{align}

This state space formulation can be rewritten as a forecast equation and a smoothing equation (as can be done with all ETS models):

\begin{align} \hat{y}_{t|t-1} &= l_{t-1}\\ l_{t} &= \alpha y_{t} + (1 - \alpha) l_{t-1} \end{align}

Here, \(\hat{y}_{t|t-1}\) is the forecast/expectation of \(y_t\) given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (the smoothing equation) calculates the new level as a weighted average of the current observation and the previous level.
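To make the recursion concrete, here is a minimal hand-rolled sketch of these two equations (the smoothing parameter and initial level below are arbitrary illustration values, not fitted ones):

import numpy as np

def ses_forecasts(y, alpha, l0):
    # returns the one-step-ahead forecasts y_hat_{t|t-1} for t = 1, ..., n
    level = l0
    forecasts = np.empty(len(y))
    for t, obs in enumerate(y):
        forecasts[t] = level                       # y_hat_{t|t-1} = l_{t-1}
        level = alpha * obs + (1 - alpha) * level  # l_t = alpha*y_t + (1-alpha)*l_{t-1}
    return forecasts

# example with made-up numbers:
# ses_forecasts(np.array([10.0, 12.0, 11.0, 13.0]), alpha=0.5, l0=10.0)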

[3]:
oildata = [
    111.0091, 130.8284, 141.2871, 154.2278,
    162.7409, 192.1665, 240.7997, 304.2174,
    384.0046, 429.6622, 359.3169, 437.2519,
    468.4008, 424.4353, 487.9794, 509.8284,
    506.3473, 340.1842, 240.2589, 219.0328,
    172.0747, 252.5901, 221.0711, 276.5188,
    271.1480, 342.6186, 428.3558, 442.3946,
    432.7851, 437.2497, 437.2092, 445.3641,
    453.1950, 454.4096, 422.3789, 456.0371,
    440.3866, 425.1944, 486.2052, 500.4291,
    521.2759, 508.9476, 488.8889, 509.8706,
    456.7229, 473.8166, 525.9509, 549.8338,
    542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
[Figure: annual oil production in Saudi Arabia (Mt)]

The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package fpp2 (the companion package to the previous edition of [1]). Below you can see how to fit such a model, here extended with an additive damped trend, to this data using statsmodels' ETS implementation. Additionally, the fit obtained with R's forecast package is shown for comparison.

[4]:
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");

# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')

plt.legend();
[Figure: oil data with the statsmodels fit and the R fit]

By default the initial states are considered to be fitting parameters and are estimated by maximizing the log-likelihood. Alternatively, it is possible to use only a heuristic for the initial values. In this case, the heuristic leads to better agreement with the R implementation.
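In addition to 'estimated' and 'heuristic', initialization_method='known' allows fixing the initial states by hand, as in this sketch (the initial values below are made-up numbers purely for illustration):

model_known = ETSModel(
    oil, error='add', trend='add', damped_trend=True,
    initialization_method='known',
    initial_level=100.0,  # made-up illustration value
    initial_trend=10.0,   # made-up illustration value
)
fit_known = model_known.fit()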

[5]:
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
                          initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");

# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')

plt.legend();
[Figure: oil data with the estimated and heuristic initialization fits, plus the fit with R parameters]

The fitted parameters and some other measures are shown by fit.summary(). Here we can see that the log-likelihood of the model using fitted initial states is a bit higher than that of the model using a heuristic for the initial states. Additionally, we see that \(\beta\) (smoothing_trend) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for \(\beta\).
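If a parameter sticking to a boundary is a concern, the default bounds can be overridden via the model's bounds argument, a dictionary mapping parameter names to (lower, upper) tuples. A sketch; the interval below is only an illustration, not a recommendation:

model_bounded = ETSModel(
    oil, error='add', trend='add', damped_trend=True,
    bounds={'smoothing_trend': (0.001, 0.5)},  # illustrative bounds for beta
)
fit_bounded = model_bounded.fit()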

[6]:
print(fit.summary())
                                 ETS Results
==============================================================================
Dep. Variable:                      y   No. Observations:                   49
Model:                      ETS(AAdN)   Log Likelihood                -258.108
Date:                Tue, 02 Feb 2021   AIC                            528.216
Time:                        07:02:07   BIC                            539.567
Sample:                    01-01-1965   HQIC                           532.523
                         - 01-01-2013   Scale                         2202.048
Covariance Type:               approx
===================================================================================
                      coef    std err          z      P>|z|      [0.025      0.975]
-----------------------------------------------------------------------------------
smoothing_level     0.9999      0.136      7.326      0.000       0.732       1.267
smoothing_trend  9.999e-05        nan        nan        nan         nan         nan
damping_trend       0.8798      0.059     14.901      0.000       0.764       0.996
initial_level      54.2331     55.750      0.973      0.331     -55.034     163.500
initial_trend      52.9975     34.392      1.541      0.123     -14.409     120.404
===================================================================================
Ljung-Box (Q):                        1.42   Jarque-Bera (JB):                20.73
Prob(Q):                              0.49   Prob(JB):                         0.00
Heteroskedasticity (H):               0.52   Skew:                            -1.02
Prob(H) (two-sided):                  0.20   Kurtosis:                         5.45
===================================================================================

Warnings:
[1] Covariance matrix calculated using numerical (complex-step) differentiation.
[7]:
print(fit_heuristic.summary())
                                 ETS Results
==============================================================================
Dep. Variable:                      y   No. Observations:                   49
Model:                      ETS(AAdN)   Log Likelihood                -258.881
Date:                Tue, 02 Feb 2021   AIC                            525.761
Time:                        07:02:08   BIC                            533.328
Sample:                    01-01-1965   HQIC                           528.632
                         - 01-01-2013   Scale                         2272.587
Covariance Type:               approx
===================================================================================
                      coef    std err          z      P>|z|      [0.025      0.975]
-----------------------------------------------------------------------------------
smoothing_level     0.9999      0.097     10.289      0.000       0.809       1.190
smoothing_trend     0.1183        nan        nan        nan         nan         nan
damping_trend       0.8000        nan        nan        nan         nan         nan
==============================================
              initialization method: heuristic
----------------------------------------------
initial_level                          33.6309
initial_trend                          34.8115
===================================================================================
Ljung-Box (Q):                        0.27   Jarque-Bera (JB):                19.93
Prob(Q):                              0.88   Prob(JB):                         0.00
Heteroskedasticity (H):               0.50   Skew:                            -0.95
Prob(H) (two-sided):                  0.18   Kurtosis:                         5.48
===================================================================================

Warnings:
[1] Covariance matrix calculated using numerical (complex-step) differentiation.

Holt-Winters’ seasonal method

The exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest of the model. This method corresponds to the ETS(A, A, A) model, which has the following state space formulation:

\begin{align} y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\ l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\ b_{t} &= b_{t-1} + \beta e_t\\ s_{t} &= s_{t-m} + \gamma e_t \end{align}
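(Note that the statsmodels fit below additionally damps the trend, i.e. it estimates an ETS(A, Ad, A) model.) As with simple exponential smoothing, the recursion can be written out by hand; the following sketch implements the undamped ETS(A, A, A) equations above, with all parameters and initial states supplied by the caller purely for illustration:

import numpy as np

def holt_winters_add(y, m, alpha, beta, gamma, l0, b0, s0):
    # one-step-ahead forecasts from the additive ETS(A, A, A) recursion;
    # s0 holds the m initial seasonal states s_{1-m}, ..., s_0
    level, trend, season = l0, b0, list(s0)
    forecasts = np.empty(len(y))
    for t, obs in enumerate(y):
        s = season.pop(0)                  # s_{t-m}
        forecasts[t] = level + trend + s   # y_hat = l_{t-1} + b_{t-1} + s_{t-m}
        e = obs - forecasts[t]             # e_t
        level = level + trend + alpha * e  # l_t = l_{t-1} + b_{t-1} + alpha*e_t
        trend = trend + beta * e           # b_t = b_{t-1} + beta*e_t
        season.append(s + gamma * e)       # s_t = s_{t-m} + gamma*e_t
    return forecasts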
[8]:
austourists_data = [
    30.05251300, 19.14849600, 25.31769200, 27.59143700,
    32.07645600, 23.48796100, 28.47594000, 35.12375300,
    36.83848500, 25.00701700, 30.72223000, 28.69375900,
    36.64098600, 23.82460900, 29.31168300, 31.77030900,
    35.17787700, 19.77524400, 29.60175000, 34.53884200,
    41.27359900, 26.65586200, 28.27985900, 35.19115300,
    42.20566386, 24.64917133, 32.66733514, 37.25735401,
    45.24246027, 29.35048127, 36.34420728, 41.78208136,
    49.27659843, 31.27540139, 37.85062549, 38.83704413,
    51.23690034, 31.83855162, 41.32342126, 42.79900337,
    55.70835836, 33.40714492, 42.31663797, 45.15712257,
    59.57607996, 34.83733016, 44.84168072, 46.97124960,
    60.01903094, 38.37117851, 46.97586413, 50.73379646,
    61.64687319, 39.29956937, 52.67120908, 54.33231689,
    66.83435838, 40.87118847, 51.82853579, 57.49190993,
    65.25146985, 43.06120822, 54.76075713, 59.83447494,
    73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
[Figure: quarterly Australian tourists data]
[9]:
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
                damped_trend=True, seasonal_periods=4)
fit = model.fit()

# fit with R params
params_R = [
    0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
    0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)

austourists.plot(label='data')
plt.ylabel('Australian Tourists')

fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
[Figure: austourists data with the statsmodels fit and the R fit]
[10]:
print(fit.summary())
                                 ETS Results
==============================================================================
Dep. Variable:                      y   No. Observations:                   68
Model:                      ETS(AAdA)   Log Likelihood                -152.627
Date:                Tue, 02 Feb 2021   AIC                            327.254
Time:                        07:02:10   BIC                            351.668
Sample:                    03-01-1999   HQIC                           336.928
                         - 12-01-2015   Scale                            5.213
Covariance Type:               approx
======================================================================================
                         coef    std err          z      P>|z|      [0.025      0.975]
--------------------------------------------------------------------------------------
smoothing_level        0.3398      0.111      3.070      0.002       0.123       0.557
smoothing_trend        0.0259      0.008      3.158      0.002       0.010       0.042
smoothing_seasonal     0.4010      0.080      5.041      0.000       0.245       0.557
damping_trend          0.9800        nan        nan        nan         nan         nan
initial_level         29.4475   2.92e+04      0.001      0.999   -5.72e+04    5.72e+04
initial_trend          0.6147      0.392      1.567      0.117      -0.154       1.383
initial_seasonal.0    -3.4379   2.92e+04     -0.000      1.000   -5.72e+04    5.72e+04
initial_seasonal.1    -5.9571   2.92e+04     -0.000      1.000   -5.72e+04    5.72e+04
initial_seasonal.2   -11.4881   2.92e+04     -0.000      1.000   -5.72e+04    5.72e+04
initial_seasonal.3          0   2.92e+04          0      1.000   -5.72e+04    5.72e+04
===================================================================================
Ljung-Box (Q):                        5.76   Jarque-Bera (JB):                 7.69
Prob(Q):                              0.67   Prob(JB):                         0.02
Heteroskedasticity (H):               0.46   Skew:                            -0.63
Prob(H) (two-sided):                  0.07   Kurtosis:                         4.05
===================================================================================

Warnings:
[1] Covariance matrix calculated using numerical (complex-step) differentiation.

Predictions

The ETS model can also be used for prediction. There are several different methods available:

- forecast: makes out-of-sample predictions
- predict: in-sample and out-of-sample predictions
- simulate: runs simulations of the state space model
- get_prediction: in-sample and out-of-sample predictions, as well as prediction intervals

get_prediction is demonstrated in the cells below; the first three are sketched right after this list.
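For orientation, the other three methods can be called on the fitted model like this (the step counts and dates are arbitrary examples):

fc = fit.forecast(steps=4)  # 4 out-of-sample point forecasts
yhat = fit.predict(start="2014-03-01", end="2016-12-01")  # in- and out-of-sample predictions
sims = fit.simulate(anchor="end", nsimulations=4, repetitions=10)  # simulated paths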

We can use them on our previously fitted model to predict from 2014 to 2020.

[11]:
pred = fit.get_prediction(start='2014', end='2020')
[12]:
df = pred.summary_frame(alpha=0.05)
df
[12]:
mean pi_lower pi_upper
2014-03-01 67.611048 63.136076 72.086020
2014-06-01 42.814811 38.339839 47.289783
2014-09-01 54.106462 49.631490 58.581434
2014-12-01 57.928276 53.453304 62.403248
2015-03-01 68.422079 63.947107 72.897051
2015-06-01 47.277849 42.802877 51.752821
2015-09-01 58.954740 54.479768 63.429712
2015-12-01 63.982155 59.507183 68.457127
2016-03-01 75.905268 71.430295 80.380240
2016-06-01 51.418122 46.654016 56.182228
2016-09-01 63.703292 58.629393 68.777191
2016-12-01 67.978107 62.575778 73.380436
2017-03-01 78.315764 71.735800 84.895727
2017-06-01 53.780408 46.883520 60.677296
2017-09-01 66.018333 58.788303 73.248362
2017-12-01 70.246847 62.668821 77.824872
2018-03-01 80.539128 71.892785 89.185471
2018-06-01 55.959305 46.968347 64.950264
2018-09-01 68.153652 58.805308 77.501996
2018-12-01 72.339460 62.621963 82.056957
2019-03-01 82.589889 71.864783 93.314995
2019-06-01 57.969051 46.876058 69.062044
2019-09-01 70.123203 58.652201 81.594204
2019-12-01 74.269620 62.411255 86.127984
2020-03-01 84.481446 71.656450 97.306441

In this case the prediction intervals were calculated using an analytical formula. This is not available for all models; for those other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the get_prediction method.
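If we understand the API correctly, this behavior can also be requested explicitly through get_prediction's method and simulate_repetitions arguments; a sketch (argument names are assumptions, check the ETSResults documentation):

# explicitly request simulated prediction intervals with more repetitions
pred_sim = fit.get_prediction(
    start="2014", end="2020",
    method="simulated",          # instead of the exact analytical formula
    simulate_repetitions=10000,  # assumed default: 1000
)
df_sim = pred_sim.summary_frame(alpha=0.05)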

We can also run simulations manually, e.g. to plot them. Since the data ends in the fourth quarter of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.

[13]:
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
[14]:
for i in range(simulated.shape[1]):
    simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
[Figure: simulated paths (gray) with mean prediction, 95% interval, and data]

In this case, we chose “end” as the simulation anchor, which means that the first simulated value will be the first out-of-sample value. It is also possible to choose another anchor inside the sample.
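For example, anchoring the simulation at an in-sample date could look like the following sketch (the anchor date is arbitrary, and we assume simulate accepts a date string as anchor):

# simulate 8 quarters starting from an (arbitrary) in-sample anchor
sims_insample = fit.simulate(
    anchor="2013-03-01",
    nsimulations=8,
    repetitions=100,
)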
