statsmodels.discrete.discrete_model.MultinomialModel.fit

MultinomialModel.fit(start_params=None, method='newton', maxiter=35, full_output=1, disp=1, callback=None, **kwargs)[source]

Fit the model using maximum likelihood.

The rest of the docstring is from statsmodels.base.model.LikelihoodModel.fit

Fit method for likelihood based models

Parameters:
start_params : array_like, optional

Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.

method : str, optional

The method determines which solver from scipy.optimize is used, and it can be chosen from among the following strings:

  • ‘newton’ for Newton-Raphson, ‘nm’ for Nelder-Mead

  • ‘bfgs’ for Broyden-Fletcher-Goldfarb-Shanno (BFGS)

  • ‘lbfgs’ for limited-memory BFGS with optional box constraints

  • ‘powell’ for modified Powell’s method

  • ‘cg’ for conjugate gradient

  • ‘ncg’ for Newton-conjugate gradient

  • ‘basinhopping’ for global basin-hopping solver

  • ‘minimize’ for generic wrapper of scipy minimize (BFGS by default)

The explicit arguments in fit are passed to the solver, with the exception of the basin-hopping solver. Each solver has several optional arguments that are not the same across solvers. See the notes section below (or scipy.optimize) for the available arguments and for the list of explicit arguments that the basin-hopping solver supports.

maxiter : int, optional

The maximum number of iterations to perform.

full_output : bool, optional

Set to True to have all available output in the Results object’s mle_retvals attribute. The output is dependent on the solver. See LikelihoodModelResults notes section for more information.

disp : bool, optional

Set to True to print convergence messages.

fargs : tuple, optional

Extra arguments passed to the likelihood function, i.e., loglike(x,*args)

callback : callable callback(xk), optional

Called after each iteration, as callback(xk), where xk is the current parameter vector.

retall : bool, optional

Set to True to return list of solutions at each iteration. Available in Results object’s mle_retvals attribute.

skip_hessian : bool, optional

If False (default), the negative inverse Hessian is calculated after the optimization. If True, the Hessian is not calculated; however, it is still available from solvers that use the Hessian during optimization (currently only "newton").

kwargs : keywords

All kwargs are passed to the chosen solver with one exception. The following keyword controls what happens after the fit:

warn_convergence : bool, optional
    If True, checks the model for the converged flag. If the
    converged flag is False, a ConvergenceWarning is issued.
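
A minimal usage sketch (not part of the original docstring; it assumes MNLogit, a concrete subclass of MultinomialModel, and synthetic data) showing how the explicit arguments above are typically passed to fit:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    exog = sm.add_constant(rng.normal(size=(500, 2)))   # intercept + 2 regressors
    endog = rng.integers(0, 3, size=500)                # 3 outcome categories

    model = sm.MNLogit(endog, exog)

    # Newton-Raphson (the default solver), capped at 50 iterations, quiet output
    res_newton = model.fit(method='newton', maxiter=50, disp=0)

    # BFGS with a per-iteration callback; skip the post-fit ConvergenceWarning check
    res_bfgs = model.fit(method='bfgs', maxiter=200, disp=0,
                         callback=lambda xk: None,
                         warn_convergence=False)

    print(res_newton.mle_retvals['converged'])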

Notes

The ‘basinhopping’ solver ignores the explicit arguments maxiter, retall, and full_output.

Optional arguments for solvers (see returned Results.mle_settings):

'newton'
    tol : float
        Relative error in params acceptable for convergence.
'nm' -- Nelder Mead
    xtol : float
        Relative error in params acceptable for convergence
    ftol : float
        Relative error in loglike(params) acceptable for
        convergence
    maxfun : int
        Maximum number of function evaluations to make.
'bfgs'
    gtol : float
        Stop when norm of gradient is less than gtol.
    norm : float
        Order of norm (np.inf is max, -np.inf is min)
    epsilon
        If fprime is approximated, use this value for the step
        size. Only relevant if LikelihoodModel.score is None.
'lbfgs'
    m : int
        This many terms are used for the Hessian approximation.
    factr : float
        A stop condition that is a variant of relative error.
    pgtol : float
        A stop condition that uses the projected gradient.
    epsilon
        If fprime is approximated, use this value for the step
        size. Only relevant if LikelihoodModel.score is None.
    maxfun : int
        Maximum number of function evaluations to make.
    bounds : sequence
        (min, max) pairs for each element in x,
        defining the bounds on that parameter.
        Use None for one of min or max when there is no bound
        in that direction.
'cg'
    gtol : float
        Stop when norm of gradient is less than gtol.
    norm : float
        Order of norm (np.inf is max, -np.inf is min)
    epsilon : float
        If fprime is approximated, use this value for the step
        size. Can be scalar or vector.  Only relevant if
        LikelihoodModel.score is None.
'ncg'
    fhess_p : callable f'(x,*args)
        Function which computes the Hessian of f times an arbitrary
        vector, p.  Should only be supplied if
        LikelihoodModel.hessian is None.
    avextol : float
        Stop when the average relative error in the minimizer
        falls below this amount.
    epsilon : float or ndarray
        If fhess is approximated, use this value for the step size.
        Only relevant if LikelihoodModel.hessian is None.
'powell'
    xtol : float
        Line-search error tolerance
    ftol : float
        Relative error in loglike(params) acceptable for
        convergence.
    maxfun : int
        Maximum number of function evaluations to make.
    start_direc : ndarray
        Initial direction set.
'basinhopping'
    niter : int
        The number of basin hopping iterations.
    niter_success : int
        Stop the run if the global minimum candidate remains the
        same for this number of iterations.
    T : float
        The "temperature" parameter for the accept or reject
        criterion. Higher "temperatures" mean that larger jumps
        in function value will be accepted. For best results
        `T` should be comparable to the separation (in function
        value) between local minima.
    stepsize : float
        Initial step size for use in the random displacement.
    interval : int
        The interval for how often to update the `stepsize`.
    minimizer : dict
        Extra keyword arguments to be passed to the minimizer
        `scipy.optimize.minimize()`, for example 'method' - the
        minimization method (e.g. 'L-BFGS-B'), or 'tol' - the
        tolerance for termination. Other arguments are mapped from
        explicit arguments of `fit`:
          - `args` <- `fargs`
          - `jac` <- `score`
          - `hess` <- `hess`
'minimize'
    min_method : str, optional
        Name of minimization method to use.
        Any method specific arguments can be passed directly.
        For a list of methods and their arguments, see
        documentation of `scipy.optimize.minimize`.
        If no method is specified, then BFGS is used.
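
A hedged illustration of the solver-specific keywords above (again assuming MNLogit and synthetic data, which are not part of this docstring); the exact set of options forwarded to scipy.optimize depends on the chosen solver:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    exog = sm.add_constant(rng.normal(size=(300, 2)))
    endog = rng.integers(0, 3, size=300)
    model = sm.MNLogit(endog, exog)

    # Nelder-Mead with its own tolerances and function-evaluation budget
    res_nm = model.fit(method='nm', xtol=1e-6, ftol=1e-6, maxfun=5000,
                       maxiter=2000, disp=0)

    # Generic scipy.optimize.minimize wrapper; the solver is selected via min_method
    res_min = model.fit(method='minimize', min_method='L-BFGS-B', disp=0)

    # The settings actually handed to the solver are recorded on the results object
    print(res_nm.mle_settings['optimizer'], res_min.mle_settings['optimizer'])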
