statsmodels.tsa.statespace.dynamic_factor.DynamicFactor.fit

method

DynamicFactor.fit(start_params=None, transformed=True, cov_type='opg', cov_kwds=None, method='lbfgs', maxiter=50, full_output=1, disp=5, callback=None, return_params=False, optim_score=None, optim_complex_step=None, optim_hessian=None, flags=None, **kwargs)

Fits the model by maximum likelihood via the Kalman filter.
 Parameters
 start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization. If None, the default is given by Model.start_params.
 transformed : bool, optional
Whether or not start_params is already transformed. Default is True.
 cov_type : str, optional
The cov_type keyword governs the method for calculating the covariance matrix of parameter estimates. Can be one of:
‘opg’ for the outer product of gradient estimator
‘oim’ for the observed information matrix estimator, calculated using the method of Harvey (1989)
‘approx’ for the observed information matrix estimator, calculated using a numerical approximation of the Hessian matrix.
‘robust’ for an approximate (quasi-maximum likelihood) covariance matrix that may be valid even in the presence of some misspecifications. Intermediate calculations use the ‘oim’ method.
‘robust_approx’ is the same as ‘robust’ except that the intermediate calculations use the ‘approx’ method.
‘none’ for no covariance matrix calculation.
 cov_kwds : dict or None, optional
A dictionary of arguments affecting covariance matrix computation.
For cov_type in ('opg', 'oim', 'approx', 'robust', 'robust_approx'), the available keys are:
‘approx_complex_step’ : bool, optional. If True, numerical approximations are computed using complex-step methods. If False, numerical approximations are computed using finite difference methods. Default is True.
‘approx_centered’ : bool, optional. If True, numerical approximations computed using finite difference methods use a centered approximation. Default is False.
 method : str, optional
The method determines which solver from scipy.optimize is used, and it can be chosen from among the following strings:
‘newton’ for Newton-Raphson
‘nm’ for Nelder-Mead
‘bfgs’ for Broyden-Fletcher-Goldfarb-Shanno (BFGS)
‘lbfgs’ for limited-memory BFGS with optional box constraints
‘powell’ for modified Powell’s method
‘cg’ for conjugate gradient
‘ncg’ for Newton-conjugate gradient
‘basinhopping’ for the global basin-hopping solver
The explicit arguments in fit are passed to the solver, with the exception of the basinhopping solver. Each solver has several optional arguments that are not the same across solvers. See the notes section below (or scipy.optimize) for the available arguments and for the list of explicit arguments that the basinhopping solver supports.
 maxiter : int, optional
The maximum number of iterations to perform.
 full_output : bool, optional
Set to True to have all available output in the Results object’s mle_retvals attribute. The output is dependent on the solver. See LikelihoodModelResults notes section for more information.
 disp : bool, optional
Set to True to print convergence messages.
 callback : callable callback(xk), optional
Called after each iteration, as callback(xk), where xk is the current parameter vector.
 return_params : bool, optional
Whether or not to return only the array of maximizing parameters. Default is False.
 optim_score : {‘harvey’, ‘approx’} or None, optional
The method by which the score vector is calculated. ‘harvey’ uses the method from Harvey (1989), ‘approx’ uses either finite difference or complex step differentiation depending upon the value of optim_complex_step, and None uses the builtin gradient approximation of the optimizer. Default is None. This keyword is only relevant if the optimization method uses the score.
 optim_complex_step : bool, optional
Whether or not to use complex step differentiation when approximating the score; if False, finite difference approximation is used. Default is True. This keyword is only relevant if optim_score is set to ‘harvey’ or ‘approx’.
 optim_hessian : {‘opg’, ‘oim’, ‘approx’}, optional
The method by which the Hessian is numerically approximated. ‘opg’ uses outer product of gradients, ‘oim’ uses the information matrix formula from Harvey (1989), and ‘approx’ uses numerical approximation. This keyword is only relevant if the optimization method uses the Hessian matrix.
 **kwargs
Additional keyword arguments to pass to the optimizer.
 Returns
 MLEResults
See also
statsmodels.base.model.LikelihoodModel.fit, MLEResults