Estimating or specifying parameters in state space models

In this notebook we show how to fix specific values of certain parameters in statsmodels’ state space models while estimating others.

In general, state space models allow users to:

  1. Estimate all parameters by maximum likelihood

  2. Fix some parameters and estimate the rest

  3. Fix all parameters (so that no parameters are estimated)
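The rest of this notebook walks through these cases. As a preview (an added sketch, not part of the original notebook), the three cases map onto the following usage patterns, illustrated here on a toy local level ('llevel') model with simulated data rather than the model used below:

import numpy as np
import statsmodels.api as sm

# Toy example: a local level model fit to a simulated random walk
y = np.random.default_rng(0).normal(size=100).cumsum()
mod_toy = sm.tsa.UnobservedComponents(y, 'llevel')  # params: sigma2.irregular, sigma2.level

# 1. Estimate all parameters by maximum likelihood
res_all = mod_toy.fit(disp=False)

# 2. Fix some parameters (here sigma2.level, at an arbitrary illustrative value) and estimate the rest
res_some = mod_toy.fit_constrained({'sigma2.level': 0.5}, disp=False)

# 3. Fix all parameters: skip estimation and run the Kalman filter / smoother directly
res_none = mod_toy.smooth([1.0, 0.5])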

[1]:
%matplotlib inline

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

from pandas_datareader.data import DataReader

To illustrate, we will use the Consumer Price Index for Apparel, which has a time-varying level and a strong seasonal component.

[2]:
endog = DataReader('CPIAPPNS', 'fred', start='1980').asfreq('MS')
endog.plot(figsize=(15, 3));
[Figure: time series plot of the CPIAPPNS series]

It is well known (e.g. Harvey and Jaeger [1993]) that the output of the Hodrick-Prescott (HP) filter can be generated by an unobserved components model, given certain restrictions on its parameters.

The unobserved components model is:

\[\begin{split}\begin{aligned} y_t & = \mu_t + \varepsilon_t & \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\ \mu_t &= \mu_{t-1} + \beta_{t-1} + \eta_t & \eta_t \sim N(0, \sigma_\eta^2) \\ \beta_t &= \beta_{t-1} + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2) \\ \end{aligned}\end{split}\]

For the trend to match the output of the HP filter, the parameters must be set as follows:

\[\begin{split}\begin{aligned} \frac{\sigma_\varepsilon^2}{\sigma_\zeta^2} & = \lambda \\ \sigma_\eta^2 & = 0 \end{aligned}\end{split}\]

where \(\lambda\) is the parameter of the associated HP filter. For the monthly data that we use here, it is usually recommended that \(\lambda = 129600\).
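This value follows the common rule of thumb of scaling the standard quarterly value \(\lambda = 1600\) by the fourth power of the change in observation frequency (three months per quarter); this added note is only for context:

\[\lambda_{\text{monthly}} = 1600 \times 3^4 = 129600\]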

[3]:
# Run the HP filter with lambda = 129600
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)

# The unobserved components model above is the local linear trend, or "lltrend", specification
mod = sm.tsa.UnobservedComponents(endog, 'lltrend')
print(mod.param_names)
['sigma2.irregular', 'sigma2.level', 'sigma2.trend']

The parameters of the unobserved components model (UCM) are written as:

  • \(\sigma_\varepsilon^2 = \text{sigma2.irregular}\)

  • \(\sigma_\eta^2 = \text{sigma2.level}\)

  • \(\sigma_\zeta^2 = \text{sigma2.trend}\)

To satisfy the above restrictions, we will set \((\sigma_\varepsilon^2, \sigma_\eta^2, \sigma_\zeta^2) = (1, 0, 1 / 129600)\).
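As a quick sanity check (an added illustration, not part of the original notebook), these values satisfy both restrictions:

import numpy as np

sigma2_irregular, sigma2_level, sigma2_trend = 1.0, 0.0, 1.0 / 129600

# The ratio of the irregular variance to the trend variance equals lambda = 129600 ...
assert np.isclose(sigma2_irregular / sigma2_trend, 129600)
# ... and the level variance is zero
assert sigma2_level == 0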

Since we are fixing all of the parameters here, we do not need to use the fit method at all; that method performs maximum likelihood estimation. Instead, we can run the Kalman filter and smoother directly at our chosen parameters using the smooth method.

[4]:
res = mod.smooth([1., 0, 1. / 129600])
print(res.summary())
                        Unobserved Components Results
==============================================================================
Dep. Variable:               CPIAPPNS   No. Observations:                  531
Model:             local linear trend   Log Likelihood               -2983.579
Date:                Thu, 18 Apr 2024   AIC                           5973.158
Time:                        11:35:32   BIC                           5985.971
Sample:                    01-01-1980   HQIC                          5978.173
                         - 03-01-2024
Covariance Type:                  opg
====================================================================================
                       coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------------
sigma2.irregular     1.0000      0.009    115.324      0.000       0.983       1.017
sigma2.level              0      0.000          0      1.000      -0.000       0.000
sigma2.trend      7.716e-06   1.99e-07     38.679      0.000    7.33e-06    8.11e-06
===================================================================================
Ljung-Box (L1) (Q):                 250.83   Jarque-Bera (JB):                 1.71
Prob(Q):                              0.00   Prob(JB):                         0.43
Heteroskedasticity (H):               2.34   Skew:                             0.05
Prob(H) (two-sided):                  0.00   Kurtosis:                         2.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

The estimate that corresponds to the HP filter’s trend estimate is given by the smoothed estimate of the level (which is \(\mu_t\) in the notation above):

[5]:
ucm_trend = pd.Series(res.level.smoothed, index=endog.index)

It is easy to see that the estimate of the smoothed level from the UCM is equal to the output of the HP filter:

[6]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(hp_trend, label='HP estimate')
ax.plot(ucm_trend, label='UCM estimate')
ax.legend();
[Figure: HP filter trend estimate vs. UCM smoothed level estimate]
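The agreement can also be confirmed numerically (an added check, not part of the original notebook), using the objects created above; the maximum absolute difference between the two estimates should be negligible:

# Maximum absolute difference between the HP trend and the UCM smoothed level
diff = np.asarray(hp_trend).ravel() - np.asarray(ucm_trend).ravel()
print(np.abs(diff).max())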

Adding a seasonal component

However, unobserved components models are more flexible than the HP filter. For example, the data shown above is clearly seasonal, with time-varying seasonal effects (the seasonality is much weaker at the beginning of the sample than at the end). One of the benefits of the unobserved components framework is that we can add a stochastic seasonal component. In this case, we will estimate the variance of the seasonal component by maximum likelihood, while still imposing the parameter restrictions described above so that the trend corresponds to the HP filter concept.

Adding the stochastic seasonal component adds one new parameter, sigma2.seasonal.

[7]:
# Construct a local linear trend model with a stochastic seasonal component of period 1 year
mod = sm.tsa.UnobservedComponents(endog, 'lltrend', seasonal=12, stochastic_seasonal=True)
print(mod.param_names)
['sigma2.irregular', 'sigma2.level', 'sigma2.trend', 'sigma2.seasonal']

In this case, we will continue to restrict the first three parameters as described above, but we want to estimate the value of sigma2.seasonal by maximum likelihood. Therefore, we will use the fit method along with the fix_params context manager.

The fix_params method takes a dictionary of parameter names and associated values. Within the generated context, those parameter values will be used in all computations; in particular, when fit is called inside the context, only the parameters that were not fixed will be estimated.

[8]:
# Here we restrict the first three parameters to specific values
with mod.fix_params({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600}):
    # Now we fit any remaining parameters, which in this case
    # is just `sigma2.seasonal`
    res_restricted = mod.fit()
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            1     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.88952D+00    |proj g|=  2.26291D-01
 This problem is unconstrained.

At iterate    5    f=  3.26626D+00    |proj g|=  2.25158D-06

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    1      5     12      1     0     0   2.252D-06   3.266D+00
  F =   3.2662596590601871

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL

Alternatively, we could have simply used the fit_constrained method, which also accepts a dictionary of constraints:

[9]:
res_restricted = mod.fit_constrained({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600})
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            1     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.88952D+00    |proj g|=  2.26291D-01

At iterate    5    f=  3.26626D+00    |proj g|=  2.25158D-06

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    1      5     12      1     0     0   2.252D-06   3.266D+00
  F =   3.2662596590601871

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
 This problem is unconstrained.

The summary output includes all parameters, but indicates that the first three were fixed (and so were not estimated).

[10]:
print(res_restricted.summary())
                            Unobserved Components Results
=====================================================================================
Dep. Variable:                      CPIAPPNS   No. Observations:                  531
Model:                    local linear trend   Log Likelihood               -1734.384
                   + stochastic seasonal(12)   AIC                           3470.768
Date:                       Thu, 18 Apr 2024   BIC                           3475.018
Time:                               11:35:34   HQIC                          3472.433
Sample:                           01-01-1980
                                - 03-01-2024
Covariance Type:                         opg
============================================================================================
                               coef    std err          z      P>|z|      [0.025      0.975]
--------------------------------------------------------------------------------------------
sigma2.irregular (fixed)     1.0000        nan        nan        nan         nan         nan
sigma2.level (fixed)              0        nan        nan        nan         nan         nan
sigma2.trend (fixed)      7.716e-06        nan        nan        nan         nan         nan
sigma2.seasonal              0.0933      0.007     12.652      0.000       0.079       0.108
===================================================================================
Ljung-Box (L1) (Q):                 455.29   Jarque-Bera (JB):                39.83
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               2.50   Skew:                             0.32
Prob(H) (two-sided):                  0.00   Kurtosis:                         4.20
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

For comparison, we also compute the unrestricted maximum likelihood estimates (MLE). In this case, the estimate of the level will no longer correspond to the HP filter concept.

[11]:
res_unrestricted = mod.fit()
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            4     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.63648D+00    |proj g|=  1.07130D-01
 This problem is unconstrained.

At iterate    5    f=  1.98435D+00    |proj g|=  2.99231D-01
  ys=-3.222E-01  -gs= 3.185E-01 BFGS update SKIPPED

At iterate   10    f=  1.56733D+00    |proj g|=  7.28698D-01

At iterate   15    f=  1.48593D+00    |proj g|=  1.57938D-01

At iterate   20    f=  1.40444D+00    |proj g|=  3.25266D-01

At iterate   25    f=  1.38763D+00    |proj g|=  1.81951D-02

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    4     28     53      1     1     0   3.894D-06   1.388D+00
  F =   1.3875681402651436

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL

Finally, we can retrieve the smoothed estimates of the trend and seasonal components.

[12]:
# Construct the smoothed level estimates
unrestricted_trend = pd.Series(res_unrestricted.level.smoothed, index=endog.index)
restricted_trend = pd.Series(res_restricted.level.smoothed, index=endog.index)

# Construct the smoothed estimates of the seasonal pattern
unrestricted_seasonal = pd.Series(res_unrestricted.seasonal.smoothed, index=endog.index)
restricted_seasonal = pd.Series(res_restricted.seasonal.smoothed, index=endog.index)

Comparing the estimated levels, it is clear that the seasonal UCM with fixed parameters still produces a trend that corresponds very closely (although no longer exactly) to the HP filter output.

Meanwhile, the estimated level from the model with no parameter restrictions (the MLE model) is much less smooth than either of these.
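One rough way to quantify this difference in smoothness (an added illustration, not part of the original notebook) is to compare the variability of the second differences of each estimated trend; smaller values indicate a smoother path:

# Standard deviation of second differences as a crude smoothness measure
for label, trend in [('MLE, with seasonal', unrestricted_trend),
                     ('Fixed parameters, with seasonal', restricted_trend),
                     ('HP filter, no seasonal', hp_trend)]:
    print(f'{label}: {np.diff(np.asarray(trend).ravel(), n=2).std():.4f}')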

[13]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(unrestricted_trend, label='MLE, with seasonal')
ax.plot(restricted_trend, label='Fixed parameters, with seasonal')
ax.plot(hp_trend, label='HP filter, no seasonal')
ax.legend();
[Figure: trend estimates from the MLE model, the fixed-parameter model, and the HP filter]

Finally, the UCM with the parameter restrictions is still able to pick up the time-varying seasonal component quite well.

[14]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(unrestricted_seasonal, label='MLE')
ax.plot(restricted_seasonal, label='Fixed parameters')
ax.legend();
[Figure: smoothed seasonal estimates from the MLE and fixed-parameter models]
