Gene Conservation Laboratory
Statistics Program for Analyzing Mixtures (SPAM) Software

SPAM FAQs Page


The Importance of Checking Convergence

When the maximum number of search iterations has been reached, SPAM automatically creates the appropriate output files to record the current state of the search, regardless of whether the algorithms have converged on the conditional maximum likelihood estimate. There are two reasons to check for convergence before relying on the reported mixture contribution estimates or confidence intervals: (1) to make sure convergence was attained (and the result is trustworthy), and (2) to make sure, when conducting Monte Carlo simulations or bootstrap resampling, that the same stopping criterion was employed in each simulation or resample. This is required to maintain comparable numerical accuracy among the simulations or resamplings.

  1. SPAM will report an estimate regardless of whether the optimization routine converged under the GPA criterion, ran out of iterations, or stalled at a local maximum or flat area of the likelihood surface. The only way to be sure that SPAM converged adequately is to check the log file.
  2. The EM algorithm is one of the optimizers SPAM uses to find the maximum likelihood mixture estimate. Running the EM algorithm with different stopping criteria can produce different maximum likelihood estimates (Seidel et al. 2000). So, for example, if you are running bootstrap replications to estimate a bootstrap confidence interval on a mixture contribution, there may be extra variation in the replicate contribution estimates because the algorithm sometimes stops under the GPA criterion and other times under the relative likelihood tolerance criterion. For proper estimates, make sure all replicates converged under the same criterion, preferably the GPA criterion, since the likelihood tolerance criterion signals a lack of progress rather than convergence. The sketch below illustrates the effect in miniature.
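
The effect is easy to reproduce on a small scale. The following Python sketch (an illustration only, not SPAM's internals) runs a bare-bones EM on the mixture proportions, with f[i][j] standing in for the probability of individual i's genotype given baseline population j; the data, tolerances, and names are hypothetical. The two stopping rules halt after different numbers of iterations and can return different estimates:

    # Bare-bones EM for mixture proportions with two stopping rules.
    # Hypothetical data: f[i][j] stands in for P(genotype of i | population j).
    import numpy as np

    rng = np.random.default_rng(1)
    f = rng.gamma(2.0, size=(200, 3))             # stand-in genotype likelihoods

    def em(f, tol, criterion, max_iter=100000):
        n, k = f.shape
        p = np.full(k, 1.0 / k)                   # uniform starting estimate
        old_ll = np.log(f @ p).sum()
        for it in range(1, max_iter + 1):
            mix = f @ p                           # sum_j p_j * f_ij, per individual
            new_p = p * (f / mix[:, None]).mean(axis=0)   # standard EM update
            ll = np.log(f @ new_p).sum()
            if criterion == "rel_likelihood":     # relative change in log likelihood
                done = abs(ll - old_ll) <= tol * abs(old_ll)
            else:                                 # "param": change in the estimates
                done = np.max(np.abs(new_p - p)) <= tol
            p, old_ll = new_p, ll
            if done:
                return p, it
        return p, max_iter

    for crit in ("rel_likelihood", "param"):
        p, its = em(f, tol=1e-8, criterion=crit)
        print(f"{crit:>15}: stopped after {its} iterations, estimate {np.round(p, 5)}")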

How can I check convergence?

  1. Check the *.log file. SPAM provides a number of informative reports on why the algorithm stopped searching (a quick way to scan for these messages is sketched after this list):

    If you see
    "*** Warning: Searching for the maximum likelihood estimates.
    Exceeded the maximum number of iterations"
    then you should increase the 'maximum # of iterations' in the control parameters section of the control file.

    If you see
    "*** Warning: Searching for the maximum likelihood estimates.
    The conjugate gradient algorithm took too many step reductions"
    then the search may be getting stuck at a local maximum rather than the global maximum. You should set the 'use IRLS algorithm in optimal search' command to TRUE in the optimization options section of the control file to tell SPAM to switch algorithms if it gets stuck.

    Note that when one uses the IRLS algorithm, SPAM initiates the search with the conjugate gradient algorithm until the parameter estimates appear close enough to convergence to switch over to the IRLS algorithm. The conjugate gradient algorithm uses the change in likelihood (not in the parameter estimates) to decide when to halve its steps, so one may still see 'step halving' warnings in the *.log file from the period before the search switches to the IRLS algorithm.

    If you see
    "*** Warning: Searching for the maximum likelihood estimates.
    Relative function convergence. Tolerance = .0000E-09."
    then the search stopped because the relative change in the likelihood was less than the tolerance set in the control file. This suggests the routine may have been trapped at a local maximum.

    If you see
    "*** Note: Searching for the maximum likelihood estimates.
    Convergence. Guaranteed percent maximum value of the likelihood achieved = 90.2%."
    then the search converged because an estimate was found whose likelihood is guaranteed to exceed the relative threshold (the GPA parameter) set in the control file.

  2. Check the *.est file. Check that the guaranteed 'percent of maximum' estimate exceeds the threshold set for the GPA parameter in the control parameters section of the control file. This value can change in odd ways as the search progresses, since the 'current' mixture estimate determines both the 'current' likelihood and the 'current' estimate of the upper bound on the likelihood at the maximum likelihood estimate.

    More informatively, check the estimated score value for each population (scores are the partial derivatives of the likelihood with respect to each of the population mixture parameters). If the search has found the true maximum likelihood estimate, then all of the score values should be 0. Large positive values (greater than about 0.5) signify that SPAM has not attained the global maximum likelihood estimate; small positive values can show up due to numerical issues. Note that the largest score value is marked with an asterisk for easy detection. If there are large positive values, then the algorithm is likely either stuck at a local maximum or was stopped short of convergence. Either increase the number of iterations, tell SPAM to use the IRLS algorithm, and/or try a different initial starting estimate (if you supplied one originally). Negative values are not a concern. A sketch of this score check appears after this list.

  3. Check the *.bot file (if resampling was used). Though the *.log and *.est files are more informative, a common sign that convergence may not have been attained is a mean bootstrap estimate that differs markedly from the original mixture estimate ('Mean Estimate' versus 'Expected' in the first section of the *.bot file). If this occurs, check the *.log and *.est files for clues as to whether failed convergence of the original estimate is at fault.
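
As a quick aid for step 1, the following Python sketch scans a log file for the warning strings quoted above. The file name is hypothetical; the matched substrings are taken directly from the messages shown in this FAQ:

    # Scan a SPAM *.log file for the convergence warnings quoted above.
    warnings = [
        "Exceeded the maximum number of iterations",
        "took too many step reductions",
        "Relative function convergence",
    ]
    with open("mixture.log") as log:              # hypothetical file name
        text = log.read()

    flagged = [w for w in warnings if w in text]
    if flagged:
        print("Check convergence; the log contains:", flagged)
    elif "Guaranteed percent maximum value of the likelihood achieved" in text:
        print("Search converged under the GPA criterion.")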
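
And for the score check in step 2, here is a minimal sketch of the underlying arithmetic. For the conditional mixture likelihood log L(p) = sum_i log(sum_j p_j f_ij), the partial derivative with respect to p_j is sum_i f_ij / (sum_k p_k f_ik). The scaling below, which makes every score 0 at an interior maximum and negative for populations held at zero, is an assumption chosen for illustration and may differ from the scaling SPAM prints:

    # Score check for mixture proportions: at the MLE, scores for all
    # populations with p_j > 0 should be ~0; negative values for p_j = 0
    # are expected; a large positive value means the search stopped short.
    import numpy as np

    rng = np.random.default_rng(1)
    f = rng.gamma(2.0, size=(200, 3))             # stand-in genotype likelihoods

    def scores(p, f):
        mix = f @ p                               # sum_j p_j * f_ij, per individual
        return (f / mix[:, None]).mean(axis=0) - 1.0

    p_hat = np.array([0.62, 0.38, 0.0])           # hypothetical reported estimate
    print(np.round(scores(p_hat, f), 4))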

What if the search did not converge?

See the descriptions above for problem-specific recommendations. In general there are three possible actions:
==> Increase the maximum number of iterations.
==> Set the 'use IRLS' option to TRUE.
==> Try different initial starting estimates (the default is a uniform probability across populations).
One could also raise the GPA requirement, forcing the search to continue until a higher guaranteed percentage of the maximum likelihood is achieved.
The manual currently (Aug. 2000, SPAM 3.2) states that one can remove a given convergence criterion by setting its tolerance to zero (in the case of the parameter estimate tolerance or the relative likelihood tolerance) or to one hundred (in the case of the GPA). This is actually not true, because of the variety of places the algorithm checks for convergence. For example, if you set the relative likelihood tolerance to zero, the algorithm will get caught in an iterative sequence of step halvings, because the change in relative likelihood always appears too large to allow acceptance of the next proposed move in the search space. Eventually, too many step halvings occur and the algorithm halts with the corresponding warning.

Seidel, W., Mosler, K., and Alker, M. 2000. A cautionary note on likelihood ratio tests in mixture models. Annals of the Institute of Statistical Mathematics 52(3): 481-487.