statsmodels.gam.generalized_additive_model.GLMGam.fit_regularized

GLMGam.fit_regularized(method='elastic_net', alpha=0.0, start_params=None, refit=False, opt_method='bfgs', **kwargs)

Return a regularized fit of the model.

Parameters:
method : {'elastic_net'}

Only the elastic_net approach is currently implemented.

alpha : scalar or array_like

The penalty weight. If a scalar, the same penalty weight applies to all variables in the model. If a vector, it must have the same length as params, and contains a penalty weight for each coefficient.

start_params : array_like

Starting values for params.

refit : bool

If True, the model is refit using only the variables that have non-zero coefficients in the regularized fit. The refitted model is not regularized.

opt_method : str

The method used for numerical optimization.

**kwargs

Additional keyword arguments used when fitting the model.

Returns:
GLMResults

An array, or a GLMResults object of the same type returned by fit.
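
A minimal, hypothetical usage sketch (the simulated data, the column names x0, x1, y, and all tuning values are made up for illustration) showing how fit_regularized can be called on a GLMGam model built with a BSplines smoother:

```python
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

# Simulated data: a smooth effect of x0 plus a linear effect of x1.
rng = np.random.default_rng(0)
data = pd.DataFrame({"x0": rng.uniform(size=200), "x1": rng.uniform(size=200)})
data["y"] = np.sin(6 * data["x0"]) + 0.5 * data["x1"] + rng.normal(scale=0.3, size=200)

# B-spline basis for the smooth term; df and degree are illustrative choices.
bs = BSplines(data[["x0"]], df=[8], degree=[3])

# Linear part in the formula, smooth part via the smoother argument.
gam = GLMGam.from_formula("y ~ x1", data=data, smoother=bs)
res = gam.fit_regularized(method="elastic_net", alpha=0.1)
print(res.params)
```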

Notes

The penalty is the elastic net penalty, which is a combination of L1 and L2 penalties.

The function that is minimized is:

\[-\frac{\text{loglike}}{n} + \alpha \left( \frac{1 - \text{L1\_wt}}{2} \, \lVert params \rVert_2^2 + \text{L1\_wt} \, \lVert params \rVert_1 \right)\]

where \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) are the L1 and L2 norms.

Post-estimation results are based on the same data used to select variables, hence may be subject to overfitting biases.
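
To make the objective concrete, here is a small sketch (the helper name and the numeric values are purely illustrative, not part of the statsmodels API) that evaluates the penalized criterion for a given parameter vector:

```python
import numpy as np

def elastic_net_objective(loglike, params, nobs, alpha, L1_wt):
    # Objective minimized by the elastic net fit:
    # -loglike/n + alpha*((1 - L1_wt)*||params||_2^2 / 2 + L1_wt*||params||_1)
    params = np.asarray(params, dtype=float)
    l2_sq = np.sum(params ** 2)      # squared L2 norm
    l1 = np.sum(np.abs(params))      # L1 norm
    return -loglike / nobs + alpha * ((1 - L1_wt) * l2_sq / 2 + L1_wt * l1)

# Arbitrary example values: loglike = -120.0 on nobs = 100 observations.
print(elastic_net_objective(-120.0, [0.5, -0.2, 0.0], 100, alpha=0.1, L1_wt=0.5))
```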

The elastic_net method uses the following keyword arguments (a usage sketch follows this list):

maxiter : int

Maximum number of iterations.

L1_wt : float

Must be in [0, 1]. The L1 penalty has weight L1_wt and the L2 penalty has weight 1 - L1_wt.

cnvrg_tol : float

Convergence threshold for maximum parameter change after one sweep through all coefficients.

zero_tol : float

Coefficients below this threshold are treated as zero.
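
For illustration, a hedged sketch of a call that sets these keyword arguments explicitly (continuing the hypothetical gam model from the earlier example; the numeric values are arbitrary, not recommended settings):

```python
res = gam.fit_regularized(
    method="elastic_net",
    alpha=0.1,        # overall penalty weight
    L1_wt=0.5,        # mix: 0.5 L1 and 0.5 L2
    maxiter=200,      # maximum number of iterations
    cnvrg_tol=1e-7,   # stop when the largest parameter change in a sweep is below this
    zero_tol=1e-8,    # coefficients below this are treated as zero
    refit=False,      # keep the regularized (non-refit) estimates
)
print(res.params)
```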