gov.sandia.cognition.learning.algorithm.minimization
Class FunctionMinimizerLiuStorey
java.lang.Object
gov.sandia.cognition.util.AbstractCloneableSerializable
gov.sandia.cognition.algorithm.AbstractIterativeAlgorithm
gov.sandia.cognition.algorithm.AbstractAnytimeAlgorithm<ResultType>
gov.sandia.cognition.learning.algorithm.AbstractAnytimeBatchLearner<EvaluatorType,InputOutputPair<InputType,OutputType>>
gov.sandia.cognition.learning.algorithm.minimization.AbstractAnytimeFunctionMinimizer<Vector,Double,DifferentiableEvaluator<? super Vector,Double,Vector>>
gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerConjugateGradient
gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerLiuStorey
- All Implemented Interfaces:
- AnytimeAlgorithm<InputOutputPair<Vector,Double>>, IterativeAlgorithm, StoppableAlgorithm, AnytimeBatchLearner<DifferentiableEvaluator<? super Vector,Double,Vector>,InputOutputPair<Vector,Double>>, BatchLearner<DifferentiableEvaluator<? super Vector,Double,Vector>,InputOutputPair<Vector,Double>>, FunctionMinimizer<Vector,Double,DifferentiableEvaluator<? super Vector,Double,Vector>>, CloneableSerializable, Serializable, Cloneable
@PublicationReference(author={"Y. Liu","C. Storey"},
title="Efficient generalized conjugate gradient algorithms, Part 1: theory",
type=Journal,
publication="Journal of Optimization Theory and Applications",
pages={129,137},
year=1991,
notes={"I\'ve seen independent analyses that indicate that this is the most efficient CG algorithm out there.","For example, http://www.ici.ro/camo/neculai/cg.ppt"})
public class FunctionMinimizerLiuStorey
- extends FunctionMinimizerConjugateGradient
This is an implementation of the Liu-Storey conjugate gradient
minimization procedure. It is an unconstrained nonlinear optimization
technique that uses first-order derivative (gradient) information to
determine the direction of exact line searches. The algorithm is generally
considered inferior to BFGS, but it does not store an NxN inverse-Hessian
approximation. Thus, if your problem has many inputs (N), conjugate gradient
minimization may be better suited than BFGS. But try BFGS first.
The Liu-Storey CG variant is often considered superior to the
Polak-Ribière CG variant. In my experience, the two perform about equally,
with Liu-Storey slightly better, but try both before settling on one.
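A minimal usage sketch follows; ExampleQuadratic and LiuStoreyExample are
hypothetical names, written against the DifferentiableEvaluator and
FunctionMinimizer interfaces listed above:

 import gov.sandia.cognition.learning.algorithm.minimization.FunctionMinimizerLiuStorey;
 import gov.sandia.cognition.learning.data.InputOutputPair;
 import gov.sandia.cognition.math.DifferentiableEvaluator;
 import gov.sandia.cognition.math.matrix.Vector;
 import gov.sandia.cognition.math.matrix.VectorFactory;
 import gov.sandia.cognition.util.AbstractCloneableSerializable;

 // Hypothetical objective f(x) = x^T x, with gradient 2x and minimum at the origin.
 class ExampleQuadratic
     extends AbstractCloneableSerializable
     implements DifferentiableEvaluator<Vector, Double, Vector>
 {
     public Double evaluate(Vector input) {
         return input.dotProduct(input);
     }
     public Vector differentiate(Vector input) {
         return input.scale(2.0);
     }
 }

 public class LiuStoreyExample {
     public static void main(String[] args) {
         // The default constructor falls back on the default line minimizer.
         FunctionMinimizerLiuStorey minimizer = new FunctionMinimizerLiuStorey();
         // Start the search away from the known minimum at the origin.
         minimizer.setInitialGuess(VectorFactory.getDefault().copyValues(3.0, -4.0));
         InputOutputPair<Vector, Double> result = minimizer.learn(new ExampleQuadratic());
         System.out.println("argmin = " + result.getInput()
             + ", f(argmin) = " + result.getOutput());
     }
 }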
- Since:
- 2.1
- Author:
- Kevin R. Dixon
- See Also:
- Serialized Form
Method Summary
protected double computeScaleFactor(Vector gradientCurrent, Vector gradientPrevious)
          Computes the conjugate gradient parameter for the particular update scheme.
FunctionMinimizerLiuStorey
public FunctionMinimizerLiuStorey()
- Creates a new instance of FunctionMinimizerLiuStorey
FunctionMinimizerLiuStorey
public FunctionMinimizerLiuStorey(LineMinimizer<?> lineMinimizer)
- Creates a new instance of FunctionMinimizerLiuStorey
- Parameters:
lineMinimizer
- Work-horse algorithm that minimizes the function along a direction
FunctionMinimizerLiuStorey
public FunctionMinimizerLiuStorey(LineMinimizer<?> lineMinimizer,
Vector initialGuess,
double tolerance,
int maxIterations)
- Creates a new instance of FunctionMinimizerLiuStorey
- Parameters:
lineMinimizer
- Work-horse algorithm that minimizes the function along a direction
initialGuess
- Initial guess about the minimum of the function
tolerance
- Tolerance of the minimization algorithm, must be >= 0.0, typically ~1e-10
maxIterations
- Maximum number of iterations, must be > 0, typically ~100
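A construction sketch using all parameters; LineMinimizerDerivativeBased
(from the minimization.line subpackage) is assumed here as the line-search
work-horse, and the values shown are only the typical ones noted above:

 FunctionMinimizerLiuStorey minimizer = new FunctionMinimizerLiuStorey(
     new LineMinimizerDerivativeBased(),               // line-search algorithm
     VectorFactory.getDefault().copyValues(1.0, 1.0),  // initialGuess
     1e-10,                                            // tolerance, must be >= 0.0
     100);                                             // maxIterations, must be > 0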
computeScaleFactor
protected double computeScaleFactor(Vector gradientCurrent,
Vector gradientPrevious)
- Description copied from class:
FunctionMinimizerConjugateGradient
- Computes the conjugate gradient parameter for the particular update
scheme.
- Specified by:
computeScaleFactor
in class FunctionMinimizerConjugateGradient
- Parameters:
gradientCurrent
- Gradient at the current evaluation point
gradientPrevious
- Gradient at the previous evaluation point
- Returns:
- "beta" scale factor