Package gov.sandia.cognition.learning.experiment

Provides experiments for validating the performance of learning algorithms.

Interface Summary
LearningExperiment - The LearningExperiment interface defines the general functionality of an object that implements an experiment regarding machine learning algorithms.
LearningExperimentListener - The LearningExperimentListener interface defines the functionality of an object that listens to events from a LearningExperiment.
ValidationFoldCreator<InputDataType,FoldDataType> - The ValidationFoldCreator interface defines the functionality for an object that can create a collection of folds for a validation experiment, where a set of data is split into training and testing sets multiple times. A hypothetical implementation sketch follows this table.

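To make the ValidationFoldCreator contract concrete, here is a minimal sketch of a custom fold creator that performs a single 50/50 split. It assumes the interface declares a createFolds method returning a List of PartitionedDataset objects and that gov.sandia.cognition.learning.data.DefaultPartitionedDataset offers a (training, testing) constructor; verify both against the interface Javadoc before relying on them.

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import gov.sandia.cognition.learning.data.DefaultPartitionedDataset;
import gov.sandia.cognition.learning.data.PartitionedDataset;
import gov.sandia.cognition.learning.experiment.ValidationFoldCreator;

// Hypothetical fold creator: a single 50/50 train/test split.
// Assumes ValidationFoldCreator declares something like
//   List<PartitionedDataset<FoldDataType>> createFolds(Collection<? extends InputDataType> data)
public class HalfSplitFoldCreator<DataType>
    implements ValidationFoldCreator<DataType, DataType>
{
    @Override
    public List<PartitionedDataset<DataType>> createFolds(
        final Collection<? extends DataType> data)
    {
        final List<DataType> all = new ArrayList<DataType>(data);
        final int half = all.size() / 2;

        // First half trains, second half tests. A real fold creator would
        // typically shuffle the data and produce several folds.
        final List<DataType> training =
            new ArrayList<DataType>(all.subList(0, half));
        final List<DataType> testing =
            new ArrayList<DataType>(all.subList(half, all.size()));

        final List<PartitionedDataset<DataType>> folds =
            new ArrayList<PartitionedDataset<DataType>>(1);
        folds.add(new DefaultPartitionedDataset<DataType>(training, testing));
        return folds;
    }
}

In practice the CrossFoldCreator, LeaveOneOutFoldCreator, RandomByTwoFoldCreator, and RandomFoldCreator classes listed below cover the common splitting strategies, so a custom fold creator is only needed for nonstandard splits.
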
Class Summary
AbstractLearningExperiment - The AbstractLearningExperiment class implements the general functionality of the LearningExperiment interface, which is mainly the handling of listeners and firing of events.
AbstractValidationFoldExperiment<InputDataType,FoldDataType> - The AbstractValidationFoldExperiment class implements a common way of structuring an experiment around a ValidationFoldCreator object where the fold creator is used to create each of the individual trials of the experiment.
CrossFoldCreator<DataType> - The CrossFoldCreator implements a validation fold creator that creates folds for a typical k-fold cross-validation experiment.
LearnerComparisonExperiment<InputDataType,FoldDataType,LearnedType,StatisticType,SummaryType> - The LearnerComparisonExperiment compares the performance of two machine learning algorithms to determine (using a statistical test) if the two algorithms have significantly different performance.
LearnerComparisonExperiment.Result<SummaryType> - Encapsulates the results of the comparison experiment.
LearnerRepeatExperiment<InputDataType,LearnedType,StatisticType,SummaryType> - Runs an experiment where the same learner is evaluated multiple times on the same data.
LearnerValidationExperiment<InputDataType,FoldDataType,LearnedType,StatisticType,SummaryType> - The LearnerValidationExperiment class implements an experiment where a supervised machine learning algorithm is evaluated by applying it to a set of folds created from a given set of data.
LeaveOneOutFoldCreator<DataType> - The LeaveOneOutFoldCreator class implements the leave-one-out method for creating training-testing folds for a cross-validation experiment.
OnlineLearnerValidationExperiment<DataType,LearnedType,StatisticType,SummaryType> - Implements an experiment where an incremental supervised machine learning algorithm is evaluated by applying it to a set of data by successively testing on each item and then training on it.
ParallelLearnerValidationExperiment<InputDataType,FoldDataType,LearnedType,StatisticType,SummaryType> - Parallel version of the LearnerValidationExperiment class that executes the validation experiments across available cores and hyperthreads.
RandomByTwoFoldCreator<DataType> - A validation fold creator that takes a given collection of data and randomly splits it in half a given number of times, returning two folds for each split, using one half as training and the other half as testing.
RandomFoldCreator<DataType> - The RandomFoldCreator class makes use of a randomized data partitioner to create a set number of folds for a set of data by passing the data to the data partitioner multiple times.
SupervisedLearnerComparisonExperiment<InputType,OutputType,StatisticType,SummaryType> - A comparison experiment for supervised learners.
SupervisedLearnerValidationExperiment<InputType,OutputType,StatisticType,SummaryType> - The SupervisedLearnerValidationExperiment class extends the LearnerValidationExperiment class to provide an easy way to create a learner validation experiment for supervised learning; a wiring sketch follows this table.

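As a rough, non-authoritative sketch of how the classes above fit together, the example below wires a CrossFoldCreator into a SupervisedLearnerValidationExperiment to run a 10-fold cross-validation. The (numFolds, random) constructor of CrossFoldCreator, the (foldCreator, performanceEvaluator, summarizer) constructor of the experiment, the evaluatePerformance entry point, and the MeanZeroOneErrorEvaluator, NumberAverager, and SupervisedBatchLearner helper types are assumptions about the surrounding Cognitive Foundry API; check the class Javadoc for the exact signatures and generic bounds.

import java.util.Collection;
import java.util.Random;

import gov.sandia.cognition.learning.algorithm.SupervisedBatchLearner;
import gov.sandia.cognition.learning.data.InputOutputPair;
import gov.sandia.cognition.learning.experiment.CrossFoldCreator;
import gov.sandia.cognition.learning.experiment.SupervisedLearnerValidationExperiment;
import gov.sandia.cognition.learning.performance.MeanZeroOneErrorEvaluator;
import gov.sandia.cognition.math.NumberAverager;

// Hypothetical 10-fold cross-validation driver. Signatures are assumed,
// not copied from the actual API.
public class CrossValidationSketch
{
    public static <InputType, OutputType> Double crossValidate(
        final SupervisedBatchLearner<InputType, OutputType, ?> learner,
        final Collection<InputOutputPair<InputType, OutputType>> data)
    {
        // Shuffle the labeled data and split it into 10 train/test folds.
        final CrossFoldCreator<InputOutputPair<InputType, OutputType>> foldCreator =
            new CrossFoldCreator<InputOutputPair<InputType, OutputType>>(
                10, new Random(42));

        // Assumed wiring: the experiment trains the learner on each fold's
        // training set, scores the learned function on the held-out set
        // (here, mean zero-one error), and averages the per-fold statistics.
        final SupervisedLearnerValidationExperiment<InputType, OutputType, Double, Double>
            experiment =
                new SupervisedLearnerValidationExperiment<InputType, OutputType, Double, Double>(
                    foldCreator,
                    new MeanZeroOneErrorEvaluator<InputType, OutputType>(),
                    new NumberAverager());

        // Assumed entry point: run every fold and return the summary statistic.
        return experiment.evaluatePerformance(learner, data);
    }
}

The fold creator can be swapped for a LeaveOneOutFoldCreator or RandomByTwoFoldCreator without changing the rest of the wiring, and ParallelLearnerValidationExperiment provides the same experiment with the folds executed across available cores.
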
Package gov.sandia.cognition.learning.experiment Description

Provides experiments for validating the performance of learning algorithms.
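
For deciding whether one algorithm significantly outperforms another, LearnerComparisonExperiment pairs the same fold-based evaluation with a null-hypothesis statistical test. The sketch below assumes a (foldCreator, performanceEvaluator, statisticalTest, summarizer) constructor on SupervisedLearnerComparisonExperiment and uses gov.sandia.cognition.statistics.method.StudentTConfidence as the test; the constructor, the helper types, and the commented-out evaluatePerformance call are all assumptions to verify against the actual Javadoc.

import java.util.Collection;
import java.util.Random;

import gov.sandia.cognition.learning.algorithm.SupervisedBatchLearner;
import gov.sandia.cognition.learning.data.InputOutputPair;
import gov.sandia.cognition.learning.experiment.CrossFoldCreator;
import gov.sandia.cognition.learning.experiment.SupervisedLearnerComparisonExperiment;
import gov.sandia.cognition.learning.performance.MeanZeroOneErrorEvaluator;
import gov.sandia.cognition.math.NumberAverager;
import gov.sandia.cognition.statistics.method.StudentTConfidence;

// Hypothetical comparison of two learners over the same 10 cross-validation
// folds. Constructor argument order and the run method are assumptions.
public class ComparisonSketch
{
    public static <InputType, OutputType> void compare(
        final SupervisedBatchLearner<InputType, OutputType, ?> learnerA,
        final SupervisedBatchLearner<InputType, OutputType, ?> learnerB,
        final Collection<InputOutputPair<InputType, OutputType>> data)
    {
        final CrossFoldCreator<InputOutputPair<InputType, OutputType>> foldCreator =
            new CrossFoldCreator<InputOutputPair<InputType, OutputType>>(
                10, new Random(42));

        // Assumed wiring: per-fold zero-one error for each learner, a
        // Student-t test on the two samples of per-fold errors, and a mean
        // summary for each learner.
        final SupervisedLearnerComparisonExperiment<InputType, OutputType, Double, Double>
            experiment =
                new SupervisedLearnerComparisonExperiment<InputType, OutputType, Double, Double>(
                    foldCreator,
                    new MeanZeroOneErrorEvaluator<InputType, OutputType>(),
                    new StudentTConfidence(),
                    new NumberAverager());

        // Assumed entry point (left commented because the exact signature is
        // uncertain): run both learners over the same folds, then inspect the
        // LearnerComparisonExperiment.Result, which the class summary above
        // says encapsulates the results of the comparison.
        // experiment.evaluatePerformance(DefaultPair.create(learnerA, learnerB), data);
    }
}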

Since:
2.0
Author:
Justin Basilico