gov.sandia.cognition.learning.algorithm.minimization
Class FunctionMinimizerDFP

java.lang.Object
  extended by gov.sandia.cognition.util.AbstractCloneableSerializable
      extended by gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
          extended by gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm<ResultType>
              extended by gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner<EvaluatorType,InputOutputPair<InputType,OutputType>>
                  extended by gov.sandia.cognition.learning.algorithm.minimization.AbstractAnytimeFunctionMinimizer<Vector,Double,DifferentiableEvaluator<? super Vector,Double,Vector>>
                      extended by gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerQuasiNewton
                          extended by gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerDFP
All Implemented Interfaces:
AnytimeAlgorithm<InputOutputPair<Vector,Double>>, IterativeAlgorithm, StoppableAlgorithm, AnytimeBatchLearner<DifferentiableEvaluator<? super Vector,Double,Vector>,InputOutputPair<Vector,Double>>, BatchLearner<DifferentiableEvaluator<? super Vector,Double,Vector>,InputOutputPair<Vector,Double>>, FunctionMinimizer<Vector,Double,DifferentiableEvaluator<? super Vector,Double,Vector>>, CloneableSerializable, Serializable, Cloneable

@PublicationReferences(references={@PublicationReference(author="R. Fletcher",title="Practical Methods of Optimization, Second Edition",type=Book,year=1987,pages=54,notes="Section 3.2, Equation 3.2.11"),@PublicationReference(author="Wikipedia",title="Davidon-Fletcher-Powell formula",type=WebPage,year=2008,url="http://en.wikipedia.org/wiki/Davidon-Fletcher-Powell_formula")})
public class FunctionMinimizerDFP
extends FunctionMinimizerQuasiNewton

Implementation of the Davidon-Fletcher-Powell (DFP) formula for a Quasi-Newton minimization update. The DFP formula is generally recognized as inferior to the BFGS update formula (FunctionMinimizerBFGS), and I would recommend trying that first. However, there are a few problems on which DFP may be more efficient than BFGS, so here it is.
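The DFP rank-two update of the inverse Hessian can be sketched in plain Java. This is an illustrative, self-contained sketch using double[][] arrays rather than the Foundry's Matrix/Vector types; the class and method names here are hypothetical, not part of this library.

```java
// Minimal sketch of the DFP inverse-Hessian update (Fletcher 1987, Eq. 3.2.11):
//   H_new = H + (d d^T)/(d^T g) - (H g g^T H)/(g^T H g)
// where d = x_new - x_old and g = grad_new - grad_old.
public class DfpUpdateSketch {

    static double[] multiply(double[][] A, double[] x) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += A[i][j] * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[][] dfpUpdate(double[][] H, double[] d, double[] g) {
        int n = d.length;
        double[] Hg = multiply(H, g);   // H * gamma
        double dTg = dot(d, g);         // delta^T gamma
        double gHg = dot(g, Hg);        // gamma^T H gamma
        double[][] Hnew = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                Hnew[i][j] = H[i][j] + d[i] * d[j] / dTg - Hg[i] * Hg[j] / gHg;
        return Hnew;
    }

    public static void main(String[] args) {
        double[][] H = {{1, 0}, {0, 1}};     // start from the identity
        double[] delta = {1.0, 2.0};         // x_new - x_old
        double[] gamma = {0.5, 1.0};         // g_new - g_old
        double[][] Hnew = dfpUpdate(H, delta, gamma);
        // The update satisfies the secant condition H_new * gamma = delta.
        double[] check = multiply(Hnew, gamma);
        System.out.println(check[0] + " " + check[1]);
    }
}
```

Like BFGS, the DFP update preserves symmetry and positive-definiteness (given sufficient curvature) and satisfies the secant condition, which the example above checks numerically.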

Since:
2.1
Author:
Kevin R. Dixon
See Also:
Serialized Form

Field Summary
 
Fields inherited from class gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerQuasiNewton
DEFAULT_LINE_MINIMIZER, DEFAULT_MAX_ITERATIONS, DEFAULT_TOLERANCE
 
Fields inherited from class gov.sandia.cognition.learning.algorithm.minimization.AbstractAnytimeFunctionMinimizer
initialGuess, result, tolerance
 
Fields inherited from class gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner
data, keepGoing
 
Fields inherited from class gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm
maxIterations
 
Fields inherited from class gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
DEFAULT_ITERATION, iteration
 
Constructor Summary
FunctionMinimizerDFP()
          Creates a new instance of FunctionMinimizerDFP
FunctionMinimizerDFP(LineMinimizer<?> lineMinimizer)
          Creates a new instance of FunctionMinimizerDFP
FunctionMinimizerDFP(LineMinimizer<?> lineMinimizer, Vector initialGuess, double tolerance, int maxIterations)
          Creates a new instance of FunctionMinimizerDFP
 
Method Summary
protected  boolean updateHessianInverse(Matrix hessianInverse, Vector delta, Vector gamma)
          The step that makes BFGS/DFP/SR1 different from each other.
 
Methods inherited from class gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerQuasiNewton
cleanupAlgorithm, getLineMinimizer, initializeAlgorithm, setLineMinimizer, step
 
Methods inherited from class gov.sandia.cognition.learning.algorithm.minimization.AbstractAnytimeFunctionMinimizer
getInitialGuess, getResult, getTolerance, setInitialGuess, setResult, setTolerance
 
Methods inherited from class gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner
clone, getData, getKeepGoing, learn, setData, setKeepGoing, stop
 
Methods inherited from class gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm
getMaxIterations, isResultValid, setMaxIterations
 
Methods inherited from class gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
addIterativeAlgorithmListener, fireAlgorithmEnded, fireAlgorithmStarted, fireStepEnded, fireStepStarted, getIteration, getListeners, removeIterativeAlgorithmListener, setIteration, setListeners
 
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizer
learn
 
Methods inherited from interface gov.sandia.cognition.util.CloneableSerializable
clone
 
Methods inherited from interface gov.sandia.cognition.algorithm.AnytimeAlgorithm
getMaxIterations, setMaxIterations
 
Methods inherited from interface gov.sandia.cognition.algorithm.IterativeAlgorithm
addIterativeAlgorithmListener, getIteration, removeIterativeAlgorithmListener
 
Methods inherited from interface gov.sandia.cognition.algorithm.StoppableAlgorithm
isResultValid, stop
 

Constructor Detail

FunctionMinimizerDFP

public FunctionMinimizerDFP()
Creates a new instance of FunctionMinimizerDFP


FunctionMinimizerDFP

public FunctionMinimizerDFP(LineMinimizer<?> lineMinimizer)
Creates a new instance of FunctionMinimizerDFP

Parameters:
lineMinimizer - Workhorse algorithm that minimizes the function along a direction

FunctionMinimizerDFP

public FunctionMinimizerDFP(LineMinimizer<?> lineMinimizer,
                            Vector initialGuess,
                            double tolerance,
                            int maxIterations)
Creates a new instance of FunctionMinimizerDFP

Parameters:
lineMinimizer - Workhorse algorithm that minimizes the function along a direction
initialGuess - Initial guess at the location of the minimum
tolerance - Tolerance of the minimization algorithm, must be >= 0.0, typically ~1e-10
maxIterations - Maximum number of iterations, must be > 0, typically ~100
Method Detail

updateHessianInverse

@PublicationReference(author="R. Fletcher",
                      title="Practical Methods of Optimization, Second Edition",
                      type=Book,
                      year=1987,
                      pages=54,
                      notes="Section 3.2, Equation 3.2.11")
protected boolean updateHessianInverse(Matrix hessianInverse,
                                       Vector delta,
                                       Vector gamma)
Description copied from class: FunctionMinimizerQuasiNewton
The step that makes BFGS/DFP/SR1 different from each other. This embodies the specific Quasi-Newton update step.

Specified by:
updateHessianInverse in class FunctionMinimizerQuasiNewton
Parameters:
hessianInverse - Current estimate of the Hessian inverse. Must be modified!
delta - Change in the search points (x_new - x_old)
gamma - Change in the gradients (g_new - g_old)
Returns:
True if Hessian was modified, false otherwise (due to numerical instability, etc.)
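The in-place-modification and boolean-return contract described above can be sketched as follows. This is a hypothetical standalone illustration using double[][] arrays instead of the Foundry's Matrix type; the 1e-10 guard threshold is an assumption for the sketch, not the library's actual criterion for numerical instability.

```java
// Sketch of the updateHessianInverse contract: mutate the inverse-Hessian
// estimate in place and report whether it was actually modified. The update
// is skipped (returning false) when a denominator of the DFP formula is too
// small to divide by safely, so the caller keeps the previous estimate.
public class DfpGuardedUpdate {

    static boolean updateHessianInverse(double[][] hessianInverse,
                                        double[] delta,
                                        double[] gamma) {
        int n = delta.length;
        double[] Hg = new double[n];                 // hessianInverse * gamma
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                Hg[i] += hessianInverse[i][j] * gamma[j];
        double dTg = 0.0, gHg = 0.0;
        for (int i = 0; i < n; i++) {
            dTg += delta[i] * gamma[i];              // delta^T gamma
            gHg += gamma[i] * Hg[i];                 // gamma^T H gamma
        }
        // Guard against numerical instability: refuse to divide by tiny values.
        if (Math.abs(dTg) < 1e-10 || Math.abs(gHg) < 1e-10) {
            return false;                            // Hessian left unmodified
        }
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                hessianInverse[i][j] += delta[i] * delta[j] / dTg
                                      - Hg[i] * Hg[j] / gHg;
        return true;                                 // Hessian was modified
    }

    public static void main(String[] args) {
        double[][] H = {{1, 0}, {0, 1}};
        boolean modified =
            updateHessianInverse(H, new double[]{1, 2}, new double[]{0.5, 1});
        System.out.println(modified + " " + H[0][0]);
    }
}
```

Returning false lets the surrounding quasi-Newton step continue with the previous inverse-Hessian estimate rather than corrupting it with a near-singular update.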