gov.sandia.cognition.learning.algorithm.regression
Class FletcherXuHybridEstimation

java.lang.Object
  extended by gov.sandia.cognition.util.AbstractCloneableSerializable
      extended by gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
          extended by gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm<ResultType>
              extended by gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner<Collection<? extends InputOutputPair<? extends InputType,OutputType>>,ResultType>
                  extended by gov.sandia.cognition.learning.algorithm.AbstractAnytimeSupervisedBatchLearner<Vector,Vector,ResultType>
                      extended by gov.sandia.cognition.learning.algorithm.regression.AbstractParameterCostMinimizer<GradientDescendable,SumSquaredErrorCostFunction>
                          extended by gov.sandia.cognition.learning.algorithm.regression.LeastSquaresEstimator
                              extended by gov.sandia.cognition.learning.algorithm.regression.FletcherXuHybridEstimation
All Implemented Interfaces:
AnytimeAlgorithm<GradientDescendable>, IterativeAlgorithm, MeasurablePerformanceAlgorithm, StoppableAlgorithm, AnytimeBatchLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>, BatchCostMinimizationLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>, BatchLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>, ParameterCostMinimizer<GradientDescendable>, SupervisedBatchLearner<Vector,Vector,GradientDescendable>, CloneableSerializable, Serializable, Cloneable

@PublicationReferences(references={@PublicationReference(author="R. Fletcher",title="Practical Methods of Optimization, Second Edition",type=Book,year=1987,pages={116,117},notes="Section 6.1 motivates the algorithm w.r.t. Gauss-Newton, BFGS, and Levenberg-Marquardt"),@PublicationReference(author={"R. Fletcher","C. Xu"},title="Hybrid Methods for Nonlinear Least Squares",type=Journal,year=1987,pages={371,389},publication="Institute of Mathematics and its Applications Journal of Numerical Analysis")})
public class FletcherXuHybridEstimation
extends LeastSquaresEstimator

The Fletcher-Xu hybrid method for estimating the parameters of a nonlinear least-squares problem. The FX method is a hybrid between the BFGS Quasi-Newton method and the Gauss-Newton minimization procedure. We have slightly modified the original algorithm to choose between the BFGS and Levenberg-Marquardt (Tikhonov regularization, or ridge regression) update formulae, as the LMA update is more stable than Gauss-Newton and produces better results on our test battery.

The motivation behind the FX hybrid is that Gauss-Newton (and Levenberg-Marquardt) has quadratic convergence near the optimum, but degrades to linear convergence when the parameters are far from optimal. BFGS demonstrates superlinear convergence even on nonoptimal parameter sets. The FX hybrid therefore uses BFGS when the parameters are far from optimal, but switches to Gauss-Newton (Levenberg-Marquardt in our implementation) when its quadratic convergence properties can be brought to bear.
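The switching rule can be sketched as follows. This is a self-contained illustration of the reduction test (Fletcher's Equation 6.1.16) as described in this documentation, not the class's actual internals; the class and method names below are hypothetical.

```java
// Sketch of the FX switching rule: a sufficiently large relative cost
// reduction indicates the Gauss-Newton model is trustworthy.
public class FxSwitchSketch {

    /**
     * Decides whether the next step should be Gauss-Newton (here,
     * Levenberg-Marquardt) or BFGS.
     *
     * @param previousCost  sum-squared error before the step (positive)
     * @param currentCost   sum-squared error after the step
     * @param reductionTest threshold in [0,1]; the documented default is 0.2
     * @return true for a Gauss-Newton/Levenberg-Marquardt step, false for BFGS
     */
    public static boolean useGaussNewton(
        double previousCost, double currentCost, double reductionTest) {
        double relativeReduction = (previousCost - currentCost) / previousCost;
        return relativeReduction >= reductionTest;
    }

    public static void main(String[] args) {
        // 50% relative reduction -> Gauss-Newton/LM regime.
        System.out.println(useGaussNewton(10.0, 5.0, 0.2));  // true
        // 5% relative reduction -> BFGS regime.
        System.out.println(useGaussNewton(10.0, 9.5, 0.2));  // false
    }
}
```

With the default threshold of 0.2, a step that cuts the cost by at least 20% keeps the algorithm in the Gauss-Newton/Levenberg-Marquardt regime, and a smaller reduction triggers a BFGS step; lowering the threshold therefore yields more Gauss-Newton steps, matching the description of DEFAULT_REDUCTION_TEST below.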

Generally speaking, the FX hybrid is the most efficient and effective parameter-estimation procedure we know of. However, FX requires the storage of an estimated inverse Hessian, which is O(N*N) space, where "N" is the number of parameters to estimate.
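The O(N*N) storage comes from the dense inverse-Hessian approximation that BFGS maintains. A minimal sketch of the standard rank-two BFGS update on a toy two-parameter problem is below; this is the textbook formula, not code taken from this class, and the class name is hypothetical.

```java
// Textbook BFGS inverse-Hessian update:
//   H' = (I - r s y^T) H (I - r y s^T) + r s s^T,  r = 1 / (y^T s),
// where s is the parameter change and y is the gradient change.
public class BfgsUpdateSketch {

    /** Returns the updated N-by-N inverse Hessian approximation. */
    public static double[][] update(double[][] h, double[] s, double[] y) {
        int n = s.length;
        double rho = 1.0 / dot(y, s);
        double[][] left = new double[n][n];   // (I - rho * s y^T)
        double[][] right = new double[n][n];  // (I - rho * y s^T)
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                left[i][j] = (i == j ? 1.0 : 0.0) - rho * s[i] * y[j];
                right[i][j] = (i == j ? 1.0 : 0.0) - rho * y[i] * s[j];
            }
        }
        double[][] result = multiply(multiply(left, h), right);
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                result[i][j] += rho * s[i] * s[j];
            }
        }
        return result;
    }

    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) { sum += a[i] * b[i]; }
        return sum;
    }

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        double[][] h = {{1, 0}, {0, 1}};  // start from the identity
        double[] s = {0.5, -0.25};        // parameter change
        double[] y = {1.0, 0.5};          // gradient change
        double[][] u = update(h, s, y);
        // Secant condition: the updated matrix maps y back onto s.
        double[] uy = {u[0][0] * y[0] + u[0][1] * y[1],
                       u[1][0] * y[0] + u[1][1] * y[1]};
        System.out.println(Math.abs(uy[0] - s[0]) < 1e-12
                        && Math.abs(uy[1] - s[1]) < 1e-12);  // true
    }
}
```

The N-by-N array `h` is what drives the O(N*N) memory footprint noted above; each iteration updates it in place of ever forming the true Hessian.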

Since:
2.1
Author:
Kevin R. Dixon
See Also:
Serialized Form

Field Summary
static double DEFAULT_DAMPING_DIVISOR
          Amount by which the damping factor is multiplied after an unsuccessful iteration and divided after a cost-reducing iteration; default is 2.0
static LineMinimizer<?> DEFAULT_LINE_MINIMIZER
          Default line minimization algorithm, LineMinimizerDerivativeFree
static double DEFAULT_REDUCTION_TEST
          Default reduction test from Equation 6.1.16 in Fletcher's PMOO; 0.2, given on page 117.
 
Fields inherited from class gov.sandia.cognition.learning.algorithm.regression.AbstractParameterCostMinimizer
DEFAULT_MAX_ITERATIONS, DEFAULT_TOLERANCE
 
Fields inherited from class gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner
data, keepGoing
 
Fields inherited from class gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm
maxIterations
 
Fields inherited from class gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
DEFAULT_ITERATION, iteration
 
Constructor Summary
FletcherXuHybridEstimation()
          Creates a new instance of FletcherXuHybridEstimation
FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer)
          Creates a new instance of FletcherXuHybridEstimation
FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer, double reductionTest)
          Creates a new instance of FletcherXuHybridEstimation
FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer, double reductionTest, double dampingFactorDivisor)
          Creates a new instance of FletcherXuHybridEstimation
FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer, double reductionTest, double dampingFactorDivisor, int maxIterations, double tolerance)
          Creates a new instance of FletcherXuHybridEstimation
 
Method Summary
protected  void cleanupAlgorithm()
          Called to clean up the learning algorithm's state after learning has finished.
 double getDampingFactorDivisor()
          Getter for dampingFactorDivisor
 LineMinimizer<?> getLineMinimizer()
          Getter for lineMinimizer
 double getReductionTest()
          Getter for reduction test.
protected  boolean initializeAlgorithm()
          Called to initialize the learning algorithm's state based on the data that is stored in the data field.
 void setDampingFactorDivisor(double dampingFactorDivisor)
          Setter for dampingFactorDivisor
 void setLineMinimizer(LineMinimizer<?> lineMinimizer)
          Setter for lineMinimizer
 void setReductionTest(double reductionTest)
          Setter for reduction test.
protected  boolean step()
          Called to take a single step of the learning algorithm.
 
Methods inherited from class gov.sandia.cognition.learning.algorithm.regression.AbstractParameterCostMinimizer
getCostFunction, getObjectToOptimize, getPerformance, getResult, getResultCost, getTolerance, setCostFunction, setObjectToOptimize, setResult, setResultCost, setTolerance
 
Methods inherited from class gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner
clone, getData, getKeepGoing, learn, setData, setKeepGoing, stop
 
Methods inherited from class gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm
getMaxIterations, isResultValid, setMaxIterations
 
Methods inherited from class gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
addIterativeAlgorithmListener, fireAlgorithmEnded, fireAlgorithmStarted, fireStepEnded, fireStepStarted, getIteration, getListeners, removeIterativeAlgorithmListener, setIteration, setListeners
 
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface gov.sandia.cognition.learning.algorithm.BatchCostMinimizationLearner
learn
 
Methods inherited from interface gov.sandia.cognition.util.CloneableSerializable
clone
 
Methods inherited from interface gov.sandia.cognition.algorithm.AnytimeAlgorithm
getMaxIterations, setMaxIterations
 
Methods inherited from interface gov.sandia.cognition.algorithm.IterativeAlgorithm
addIterativeAlgorithmListener, getIteration, removeIterativeAlgorithmListener
 
Methods inherited from interface gov.sandia.cognition.algorithm.StoppableAlgorithm
isResultValid, stop
 

Field Detail

DEFAULT_REDUCTION_TEST

public static final double DEFAULT_REDUCTION_TEST
Reduction test from Equation 6.1.16 in Fletcher's PMOO; 0.2 is given on page 117. Lower values result in more Gauss-Newton steps; larger values (up to 1.0) result in more BFGS steps. The default is 0.2, which also performs best in our test battery.

See Also:
Constant Field Values

DEFAULT_DAMPING_DIVISOR

public static final double DEFAULT_DAMPING_DIVISOR
Amount by which the damping factor is multiplied after an unsuccessful iteration and divided after a cost-reducing iteration; default is 2.0

See Also:
Constant Field Values
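The damping schedule described above can be sketched as follows. This is a hypothetical illustration of the documented behavior, with a default divisor of 2.0, not the class's actual code.

```java
// Sketch of a Levenberg-Marquardt style damping schedule: shrink the
// damping factor after a cost-reducing step, grow it after a failed step.
public class DampingSketch {

    /** Returns the next damping factor given whether the step reduced the cost. */
    public static double nextDamping(
        double damping, double divisor, boolean costReduced) {
        return costReduced ? damping / divisor : damping * divisor;
    }

    public static void main(String[] args) {
        double damping = 1.0;
        damping = nextDamping(damping, 2.0, true);   // cost-reducing step
        System.out.println(damping);                 // 0.5
        damping = nextDamping(damping, 2.0, false);  // unsuccessful step
        System.out.println(damping);                 // 1.0
    }
}
```

A larger divisor (e.g. 10.0, as the constructor documentation mentions) makes the damping factor react more aggressively in both directions.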

DEFAULT_LINE_MINIMIZER

public static final LineMinimizer<?> DEFAULT_LINE_MINIMIZER
Default line minimization algorithm, LineMinimizerDerivativeFree

Constructor Detail

FletcherXuHybridEstimation

public FletcherXuHybridEstimation()
Creates a new instance of FletcherXuHybridEstimation


FletcherXuHybridEstimation

public FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer)
Creates a new instance of FletcherXuHybridEstimation

Parameters:
lineMinimizer - Workhorse algorithm that finds the minimum along a particular direction

FletcherXuHybridEstimation

public FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer,
                                  double reductionTest)
Creates a new instance of FletcherXuHybridEstimation

Parameters:
lineMinimizer - Workhorse algorithm that finds the minimum along a particular direction
reductionTest - Reduction test for switching between BFGS and Levenberg-Marquardt; must be in [0,1]. Lower values result in more Levenberg-Marquardt steps, larger values in more BFGS steps.

FletcherXuHybridEstimation

public FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer,
                                  double reductionTest,
                                  double dampingFactorDivisor)
Creates a new instance of FletcherXuHybridEstimation

Parameters:
lineMinimizer - Workhorse algorithm that finds the minimum along a particular direction
reductionTest - Reduction test for switching between BFGS and Levenberg-Marquardt; must be in [0,1]. Lower values result in more Levenberg-Marquardt steps, larger values in more BFGS steps.
dampingFactorDivisor - Amount to modify the damping factor, typically 2.0 or 10.0

FletcherXuHybridEstimation

public FletcherXuHybridEstimation(LineMinimizer<?> lineMinimizer,
                                  double reductionTest,
                                  double dampingFactorDivisor,
                                  int maxIterations,
                                  double tolerance)
Creates a new instance of FletcherXuHybridEstimation

Parameters:
lineMinimizer - Workhorse algorithm that finds the minimum along a particular direction
reductionTest - Reduction test for switching between BFGS and Levenberg-Marquardt; must be in [0,1]. Lower values result in more Levenberg-Marquardt steps, larger values in more BFGS steps.
dampingFactorDivisor - Amount to modify the damping factor, typically 2.0 or 10.0
maxIterations - Maximum number of iterations before stopping
tolerance - Tolerance of the algorithm.
Method Detail

initializeAlgorithm

protected boolean initializeAlgorithm()
Description copied from class: AbstractAnytimeBatchLearner
Called to initialize the learning algorithm's state based on the data that is stored in the data field. The return value indicates if the algorithm can be run or not based on the initialization.

Specified by:
initializeAlgorithm in class AbstractAnytimeBatchLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>
Returns:
True if the learning algorithm can be run and false if it cannot.

step

protected boolean step()
Description copied from class: AbstractAnytimeBatchLearner
Called to take a single step of the learning algorithm.

Specified by:
step in class AbstractAnytimeBatchLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>
Returns:
True if another step can be taken and false if the algorithm should halt.

cleanupAlgorithm

protected void cleanupAlgorithm()
Description copied from class: AbstractAnytimeBatchLearner
Called to clean up the learning algorithm's state after learning has finished.

Specified by:
cleanupAlgorithm in class AbstractAnytimeBatchLearner<Collection<? extends InputOutputPair<? extends Vector,Vector>>,GradientDescendable>

getReductionTest

public double getReductionTest()
Getter for reduction test.

Returns:
Reduction test for switching between BFGS and Levenberg-Marquardt; must be in [0,1]. Lower values result in more Levenberg-Marquardt steps, larger values in more BFGS steps.

setReductionTest

public void setReductionTest(double reductionTest)
Setter for reduction test.

Parameters:
reductionTest - Reduction test for switching between BFGS and Levenberg-Marquardt; must be in [0,1]. Lower values result in more Levenberg-Marquardt steps, larger values in more BFGS steps.

getDampingFactorDivisor

public double getDampingFactorDivisor()
Getter for dampingFactorDivisor

Returns:
Amount to modify the damping factor, typically 2.0 or 10.0

setDampingFactorDivisor

public void setDampingFactorDivisor(double dampingFactorDivisor)
Setter for dampingFactorDivisor

Parameters:
dampingFactorDivisor - Amount to modify the damping factor, typically 2.0 or 10.0

getLineMinimizer

public LineMinimizer<?> getLineMinimizer()
Getter for lineMinimizer

Returns:
Workhorse algorithm that finds the minimum along a particular direction

setLineMinimizer

public void setLineMinimizer(LineMinimizer<?> lineMinimizer)
Setter for lineMinimizer

Parameters:
lineMinimizer - Workhorse algorithm that finds the minimum along a particular direction