Generalized Least Squares
=========================

.. _gls_notebook:

`Link to Notebook GitHub `_
In [ ]:
   from __future__ import print_function
   import statsmodels.api as sm
   import numpy as np
   from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)

The Longley dataset is a time series dataset:

In [ ]:
   data = sm.datasets.longley.load()
   data.exog = sm.add_constant(data.exog)
   print(data.exog[:5])

[[  1.00000000e+00   8.30000000e+01   2.34289000e+05   2.35600000e+03
    1.59000000e+03   1.07608000e+05   1.94700000e+03]
 [  1.00000000e+00   8.85000000e+01   2.59426000e+05   2.32500000e+03
    1.45600000e+03   1.08632000e+05   1.94800000e+03]
 [  1.00000000e+00   8.82000000e+01   2.58054000e+05   3.68200000e+03
    1.61600000e+03   1.09773000e+05   1.94900000e+03]
 [  1.00000000e+00   8.95000000e+01   2.84599000e+05   3.35100000e+03
    1.65000000e+03   1.10929000e+05   1.95000000e+03]
 [  1.00000000e+00   9.62000000e+01   3.28975000e+05   2.09900000e+03
    3.09900000e+03   1.12075000e+05   1.95100000e+03]]

Let's assume that the data is heteroskedastic and that we know the nature of the heteroskedasticity. We can then define sigma and use it to fit a GLS model.
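
For reference, with a known error covariance matrix $\Sigma$, the GLS
estimator is $\hat{\beta} = (X^T\Sigma^{-1}X)^{-1}X^T\Sigma^{-1}y$; when
$\Sigma$ is the identity matrix this reduces to ordinary least squares.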

First we will obtain the residuals from an OLS fit:

In [ ]:
   ols_resid = sm.OLS(data.endog, data.exog).fit().resid

Assume that the error terms follow an AR(1) process with a trend:

$\epsilon_i = \beta_0 + \rho\epsilon_{i-1} + \eta_i$

where $\eta \sim N(0, \sigma^2)$

and that $\rho$ is simply the correlation of the residuals. A consistent estimator for $\rho$ is to regress the residuals on the lagged residuals:

In [ ]:
   resid_fit = sm.OLS(ols_resid[1:], sm.add_constant(ols_resid[:-1])).fit()
   print(resid_fit.tvalues[1])
   print(resid_fit.pvalues[1])

-1.43902298397
0.173784447888

While we don't have strong evidence that the errors follow an AR(1) process, we continue.

In [ ]:
   rho = resid_fit.params[1]

As we know, an AR(1) process means that near-neighbors have a stronger relation, so we can impose this structure using a Toeplitz matrix:

In [ ]:
   from scipy.linalg import toeplitz

   toeplitz(range(5))
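
For reference (output reproduced here for illustration),
``toeplitz(range(5))`` returns a banded matrix whose entries are the
distance from the diagonal::

   array([[0, 1, 2, 3, 4],
          [1, 0, 1, 2, 3],
          [2, 1, 0, 1, 2],
          [3, 2, 1, 0, 1],
          [4, 3, 2, 1, 0]])
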
In [ ]:
   order = toeplitz(range(len(ols_resid)))

Our error covariance structure is then ``rho**order``, i.e. $\mathrm{Corr}(\epsilon_i, \epsilon_j) = \rho^{|i-j|}$, which defines an autocorrelation structure:

In [ ]:
   sigma = rho**order
   gls_model = sm.GLS(data.endog, data.exog, sigma=sigma)
   gls_results = gls_model.fit()
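
As a sanity check, here is a minimal sketch of the standard whitening
identity that GLS is based on (an illustration, not statsmodels' exact
implementation): fitting GLS with covariance $\Sigma$ is equivalent to
running OLS on data premultiplied by the inverse of a Cholesky factor of
$\Sigma$::

   # sigma = L @ L.T, so solving against L whitens the data; the OLS fit
   # on the whitened data should closely reproduce gls_results.params
   L = np.linalg.cholesky(sigma)
   whitened_y = np.linalg.solve(L, data.endog)
   whitened_X = np.linalg.solve(L, data.exog)
   print(sm.OLS(whitened_y, whitened_X).fit().params)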

Of course, the exact $\rho$ in this instance is not known, so it might make more sense to use feasible GLS, which currently only has experimental support.
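
As a rough sketch of the feasible-GLS idea (a hand-rolled single iteration
for illustration, not the library's experimental implementation), one could
re-estimate $\rho$ from the GLS residuals, rebuild sigma, and refit::

   # One hypothetical update step: new rho from the GLS residuals,
   # then a GLS refit with the updated covariance structure.
   refit = sm.OLS(gls_results.resid[1:],
                  sm.add_constant(gls_results.resid[:-1])).fit()
   rho_updated = refit.params[1]
   gls_updated = sm.GLS(data.endog, data.exog, sigma=rho_updated**order).fit()

In practice, ``GLSAR.iterative_fit`` (used below) with a larger ``maxiter``
automates this kind of alternation between estimating $\rho$ and refitting.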

We can use the GLSAR model with one lag to get a similar result:

In [ ]:
   glsar_model = sm.GLSAR(data.endog, data.exog, 1)
   glsar_results = glsar_model.iterative_fit(1)
   print(glsar_results.summary())

                        GLSAR Regression Results                           
==============================================================================
Dep. Variable:                      y   R-squared:                       0.996
Model:                          GLSAR   Adj. R-squared:                  0.992
Method:                 Least Squares   F-statistic:                     295.2
Date:                Mon, 20 Jul 2015   Prob (F-statistic):           6.09e-09
Time:                        17:43:38   Log-Likelihood:                -102.04
No. Observations:                  15   AIC:                             218.1
Df Residuals:                       8   BIC:                             223.0
Df Model:                           6                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
const      -3.468e+06   8.72e+05     -3.979      0.004     -5.48e+06 -1.46e+06
x1            34.5568     84.734      0.408      0.694      -160.840   229.953
x2            -0.0343      0.033     -1.047      0.326        -0.110     0.041
x3            -1.9621      0.481     -4.083      0.004        -3.070    -0.854
x4            -1.0020      0.211     -4.740      0.001        -1.489    -0.515
x5            -0.0978      0.225     -0.435      0.675        -0.616     0.421
x6          1823.1829    445.829      4.089      0.003       795.100  2851.266
==============================================================================
Omnibus:                        1.960   Durbin-Watson:                   2.554
Prob(Omnibus):                  0.375   Jarque-Bera (JB):                1.423
Skew:                           0.713   Prob(JB):                        0.491
Kurtosis:                       2.508   Cond. No.                     4.80e+09
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 4.8e+09. This might indicate that there are
strong multicollinearity or other numerical problems.

Comparing the GLS and GLSAR results, we see that there are some small differences in the parameter estimates and the resulting standard errors of the parameter estimates. This might be due to numerical differences in the algorithms, e.g. the treatment of initial conditions, because of the small number of observations in the Longley dataset.

In [ ]:
   print(gls_results.params)
   print(glsar_results.params)
   print(gls_results.bse)
   print(glsar_results.bse)