statsmodels.gam.generalized_additive_model.GLMGam.select_penweight

GLMGam.select_penweight(criterion='aic', start_params=None, start_model_params=None, method='basinhopping', **fit_kwds)

Find alpha by minimizing a results criterion.

The objective for the minimization can be a results attribute such as gcv, aic or bic, where the latter two are based on the effective degrees of freedom.

Warning: In many cases the optimization might converge to a local optimum or near-optimum. Trying different start_params or using a global optimizer is recommended; the default is basinhopping.
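For orientation, a minimal usage sketch follows. The data set, variable names (x0, x1, y) and spline settings are hypothetical and only illustrate the documented calling pattern:

    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    # hypothetical data: two continuous regressors and a Gaussian response
    rng = np.random.default_rng(123)
    data = pd.DataFrame({"x0": rng.uniform(size=200), "x1": rng.uniform(size=200)})
    data["y"] = np.sin(4 * data["x0"]) + 0.5 * data["x1"] ** 2 + rng.normal(scale=0.2, size=200)

    # B-spline bases for the two smooth terms
    bs = BSplines(data[["x0", "x1"]], df=[10, 10], degree=[3, 3])

    # model with placeholder penalization weights; only the intercept is parametric
    gam = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=[1.0, 1.0])

    # minimize the default criterion (aic) over the penalization weights
    alpha_opt, fit_res, history = gam.select_penweight()

    # refit with the selected weights
    res = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=alpha_opt).fit()
    print(alpha_opt, res.aic)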

Parameters
criterion : str

Name of the results attribute to be minimized. Default is 'aic'; other options are 'gcv', 'cv' or 'bic'.

start_params : None or array

Starting parameters for alpha in the penalization weight minimization, given on the log scale. The parameters are exponentiated internally, i.e. the minimization is carried out with respect to log(alpha) and the penalization weights are alpha = exp(params).

start_model_params : None or array

Starting parameters for the model._fit_pirls estimation.

method : 'basinhopping', 'nm' or 'minimize'

'basinhopping' and 'nm' directly use the underlying scipy.optimize functions basinhopping and fmin. 'minimize' provides access to the high-level interface scipy.optimize.minimize.

fit_kwds : keyword arguments

Additional keyword arguments are passed on in the call to the scipy optimizer. Which keywords are supported depends on the chosen scipy optimization function; see the sketch after this parameter list.
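As an illustration of how method and fit_kwds interact, a hedged sketch under the same hypothetical setup as above. The keywords xtol and options are standard scipy.optimize.fmin and scipy.optimize.minimize arguments, not select_penweight parameters, and which extra keywords are accepted depends on the chosen method:

    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    # same hypothetical setup as in the sketch above
    rng = np.random.default_rng(123)
    data = pd.DataFrame({"x0": rng.uniform(size=200), "x1": rng.uniform(size=200)})
    data["y"] = np.sin(4 * data["x0"]) + 0.5 * data["x1"] ** 2 + rng.normal(scale=0.2, size=200)
    bs = BSplines(data[["x0", "x1"]], df=[10, 10], degree=[3, 3])
    gam = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=[1.0, 1.0])

    # global search using the scipy basinhopping defaults
    alpha_bh, _, _ = gam.select_penweight(method="basinhopping")

    # local simplex search; xtol is forwarded to scipy.optimize.fmin
    alpha_nm, _, _ = gam.select_penweight(method="nm", xtol=1e-4)

    # high-level interface; options is forwarded to scipy.optimize.minimize
    alpha_min, _, _ = gam.select_penweight(method="minimize", options={"maxiter": 50})

    print(alpha_bh, alpha_nm, alpha_min)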

Returns
alpha : ndarray

Penalization parameter found by minimizing the criterion. Note that this can be only a local (near) optimum.

fit_res : tuple

Results returned by the scipy optimization routine. The parameters in the optimization problem are log(alpha).

history : dict

History of the calls to pirls; it contains alpha, the fit criterion and the parameters to which pirls converged for each alpha. A sketch of how the return values can be inspected follows below.
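A sketch of how the three return values might be inspected, again under the hypothetical setup from the first sketch. It assumes that for method='minimize' the scipy OptimizeResult is returned unchanged, so its x attribute holds the solution on the log scale:

    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    # same hypothetical setup as in the first sketch
    rng = np.random.default_rng(123)
    data = pd.DataFrame({"x0": rng.uniform(size=200), "x1": rng.uniform(size=200)})
    data["y"] = np.sin(4 * data["x0"]) + 0.5 * data["x1"] ** 2 + rng.normal(scale=0.2, size=200)
    bs = BSplines(data[["x0", "x1"]], df=[10, 10], degree=[3, 3])
    gam = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=[1.0, 1.0])

    alpha_opt, fit_res, history = gam.select_penweight(method="minimize")

    # the optimization runs on the log scale, so exp of the scipy solution
    # should correspond to the returned alpha (assuming fit_res is the
    # unchanged scipy OptimizeResult for method='minimize')
    print(alpha_opt)
    print(np.exp(fit_res.x))

    # the history dict records the pirls calls made during the search
    print(list(history))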

Notes

In the test cases, Nelder-Mead and BFGS often converge to local optima; see also https://github.com/statsmodels/statsmodels/issues/5381.

This does not use any analytical derivatives for the criterion minimization.

Status: experimental. The defaults might change if there is a better way to find a global optimum, and the API (e.g. the type of the return value) might also change.
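Given the warning about local optima, one possible check is to restart a local search from several log-scale start_params and keep the run with the smallest criterion. A sketch under the same hypothetical setup as in the first example; the starting values chosen here are arbitrary:

    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    # same hypothetical setup as in the first sketch
    rng = np.random.default_rng(123)
    data = pd.DataFrame({"x0": rng.uniform(size=200), "x1": rng.uniform(size=200)})
    data["y"] = np.sin(4 * data["x0"]) + 0.5 * data["x1"] ** 2 + rng.normal(scale=0.2, size=200)
    bs = BSplines(data[["x0", "x1"]], df=[10, 10], degree=[3, 3])
    gam = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=[1.0, 1.0])

    # restart the local Nelder-Mead search from several log-scale starting points
    candidates = []
    for sp in (np.log([0.1, 0.1]), np.log([10.0, 10.0]), np.log([1000.0, 1.0])):
        alpha, _, _ = gam.select_penweight(method="nm", start_params=sp)
        aic = GLMGam.from_formula("y ~ 1", data=data, smoother=bs, alpha=alpha).fit().aic
        candidates.append((aic, alpha))

    # keep the restart with the smallest AIC
    best_aic, best_alpha = min(candidates, key=lambda t: t[0])
    print(best_aic, best_alpha)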