gov.sandia.cognition.learning.algorithm.hmm Class MarkovChain

```
java.lang.Object
  gov.sandia.cognition.util.AbstractCloneableSerializable
      gov.sandia.cognition.learning.algorithm.hmm.MarkovChain
```
All Implemented Interfaces:
CloneableSerializable, Serializable, Cloneable
Direct Known Subclasses:
HiddenMarkovModel

```
@PublicationReference(author="Wikipedia",
    title="Markov chain",
    type=WebPage,
    year=2010,
    url="http://en.wikipedia.org/wiki/Markov_chain")
public class MarkovChain
extends AbstractCloneableSerializable
```

A Markov chain is a random process that has a finite number of states with random transition probabilities between states at discrete time steps.

Since:
3.0
Author:
Kevin R. Dixon
See Also:
Serialized Form

Field Summary
`static int` `DEFAULT_NUM_STATES`
Default number of states, 3.
`protected  Vector` `initialProbability`
Initial probability Vector over the states.
`protected  Matrix` `transitionProbability`
Transition probability matrix.

Constructor Summary
`MarkovChain()`
Default constructor.
`MarkovChain(int numStates)`
Creates a new instance of MarkovChain with uniform initial and transition probabilities.
```MarkovChain(Vector initialProbability, Matrix transitionProbability)```
Creates a new instance of MarkovChain.

Method Summary
` MarkovChain` `clone()`
This makes public the clone method on the `Object` class and removes the exception that it throws.
`protected static Vector` `createUniformInitialProbability(int numStates)`
Creates a uniform initial-probability Vector
`protected static Matrix` `createUniformTransitionProbability(int numStates)`
Creates a uniform transition-probability Matrix
` Vector` ```getFutureStateDistribution(Vector current, int numSteps)```
Simulates the Markov chain into the future, applying the transition Matrix to the given current state-probability distribution for the specified number of time steps.
` Vector` `getInitialProbability()`
Getter for initialProbability.
` int` `getNumStates()`
Gets the number of states in the Markov chain.
` Vector` `getSteadyStateDistribution()`
Returns the steady-state probability distribution over the states.
` Matrix` `getTransitionProbability()`
Getter for transitionProbability.
` void` `normalize()`
Normalizes this Markov chain.
`protected  void` `normalizeTransitionMatrix(Matrix A)`
Normalizes the transition-probability matrix
`protected static void` ```normalizeTransitionMatrix(Matrix A, int j)```
Normalizes a column of the transition-probability matrix
` void` `setInitialProbability(Vector initialProbability)`
Setter for initialProbability
` void` `setTransitionProbability(Matrix transitionProbability)`
Setter for transitionProbability.
` String` `toString()`

Methods inherited from class java.lang.Object
`equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait`

Field Detail

DEFAULT_NUM_STATES

`public static final int DEFAULT_NUM_STATES`
Default number of states, 3.

See Also:
Constant Field Values

initialProbability

`protected Vector initialProbability`
Initial probability Vector over the states. Each entry must be nonnegative and the Vector must sum to 1.

transitionProbability

`protected Matrix transitionProbability`
Transition probability matrix. The entry (i,j) is the probability of transition from state "j" to state "i". As a corollary, all entries in the Matrix must be nonnegative and the columns of the Matrix must sum to 1.
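The column-oriented convention above is easy to get backwards. As an illustrative sketch (using plain `double` arrays rather than the Foundry's `Matrix` type, and a hypothetical helper class name), a validity check for this convention looks like:

```java
// Hypothetical sketch, not part of the Foundry API: checks the column-stochastic
// convention used by MarkovChain, where entry (i,j) holds P(state j -> state i),
// so every column must be nonnegative and sum to 1.
public class ColumnStochasticCheck {

    /** Returns true when every entry is nonnegative and each column sums to 1 (within tol). */
    public static boolean isColumnStochastic(double[][] a, double tol) {
        int n = a.length;
        for (int j = 0; j < n; j++) {
            double colSum = 0.0;
            for (int i = 0; i < n; i++) {
                if (a[i][j] < 0.0) {
                    return false;
                }
                colSum += a[i][j];
            }
            if (Math.abs(colSum - 1.0) > tol) {
                return false;
            }
        }
        return true;
    }
}
```

Note that a row-stochastic matrix (rows summing to 1), the other common textbook convention, would fail this check unless it is also column-stochastic.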

Constructor Detail

MarkovChain

`public MarkovChain()`
Default constructor.

MarkovChain

`public MarkovChain(int numStates)`
Creates a new instance of MarkovChain with uniform initial and transition probabilities.

Parameters:
`numStates` - Number of states to use.

MarkovChain

```public MarkovChain(Vector initialProbability,
Matrix transitionProbability)```
Creates a new instance of MarkovChain.

Parameters:
`initialProbability` - Initial probability Vector over the states. Each entry must be nonnegative and the Vector must sum to 1.
`transitionProbability` - Transition probability matrix. The entry (i,j) is the probability of transition from state "j" to state "i". As a corollary, all entries in the Matrix must be nonnegative and the columns of the Matrix must sum to 1.
Method Detail

clone

`public MarkovChain clone()`
Description copied from class: `AbstractCloneableSerializable`
This makes public the clone method on the `Object` class and removes the exception that it throws. Its default behavior is to automatically create a clone of the exact type of object the method is called on, copying all primitives but keeping all references, which means it is a shallow copy. Extensions of this class may want to override this method (but call `super.clone()`) to implement a "smart copy": that is, to target the most common use case for creating a copy of the object. Because the default behavior is a shallow copy, extending classes only need to handle fields that need a deeper copy (or those that need to be reset). Some of the methods in `ObjectUtil` may be helpful in implementing a custom clone method. Note: the contract of this method is that you must use `super.clone()` as the basis for your implementation.

Specified by:
`clone` in interface `CloneableSerializable`
Overrides:
`clone` in class `AbstractCloneableSerializable`
Returns:
A clone of this object.

createUniformInitialProbability

`protected static Vector createUniformInitialProbability(int numStates)`
Creates a uniform initial-probability Vector

Parameters:
`numStates` - Number of states to create the Vector for
Returns:
Uniform probability Vector.

createUniformTransitionProbability

`protected static Matrix createUniformTransitionProbability(int numStates)`
Creates a uniform transition-probability Matrix

Parameters:
`numStates` - Number of states to create the Matrix for
Returns:
Uniform probability Matrix.
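A uniform transition matrix simply gives every entry the value 1/numStates, so each column trivially sums to 1. A minimal sketch of that construction, using plain arrays instead of the Foundry's `Matrix` type:

```java
// Illustrative sketch (not the Foundry implementation): a uniform
// transition matrix assigns probability 1/numStates to every transition,
// so each column is automatically a valid probability distribution.
public class UniformTransition {

    /** Builds a numStates-by-numStates matrix with every entry equal to 1/numStates. */
    public static double[][] create(int numStates) {
        double p = 1.0 / numStates;
        double[][] a = new double[numStates][numStates];
        for (int i = 0; i < numStates; i++) {
            for (int j = 0; j < numStates; j++) {
                a[i][j] = p;
            }
        }
        return a;
    }
}
```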

getInitialProbability

`public Vector getInitialProbability()`
Getter for initialProbability.

Returns:
Initial probability Vector over the states. Each entry must be nonnegative and the Vector must sum to 1.

setInitialProbability

`public void setInitialProbability(Vector initialProbability)`
Setter for initialProbability

Parameters:
`initialProbability` - Initial probability Vector over the states. Each entry must be nonnegative and the Vector must sum to 1.

getTransitionProbability

`public Matrix getTransitionProbability()`
Getter for transitionProbability.

Returns:
Transition probability matrix. The entry (i,j) is the probability of transition from state "j" to state "i". As a corollary, all entries in the Matrix must be nonnegative and the columns of the Matrix must sum to 1.

setTransitionProbability

`public void setTransitionProbability(Matrix transitionProbability)`
Setter for transitionProbability.

Parameters:
`transitionProbability` - Transition probability matrix. The entry (i,j) is the probability of transition from state "j" to state "i". As a corollary, all entries in the Matrix must be nonnegative and the columns of the Matrix must sum to 1.

normalize

`public void normalize()`
Normalizes this Markov chain.

normalizeTransitionMatrix

```protected static void normalizeTransitionMatrix(Matrix A,
int j)```
Normalizes a column of the transition-probability matrix

Parameters:
`A` - Transition probability matrix to normalize, modified by side effect
`j` - Column of the matrix to normalize

normalizeTransitionMatrix

`protected void normalizeTransitionMatrix(Matrix A)`
Normalizes the transition-probability matrix

Parameters:
`A` - Transition probability matrix to normalize, modified by side effect
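Column normalization divides each entry of a column by that column's sum, making the matrix column-stochastic. A sketch of the idea on plain arrays (the exact handling of an all-zero column here is an illustrative choice, not necessarily what the Foundry does):

```java
// Illustrative sketch of column normalization, mirroring the in-place
// (side-effecting) behavior of normalizeTransitionMatrix on plain arrays.
public class TransitionNormalizer {

    /** Divides each entry of column j by that column's sum; an all-zero column is left unchanged. */
    public static void normalizeColumn(double[][] a, int j) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i][j];
        }
        if (sum != 0.0) {
            for (int i = 0; i < a.length; i++) {
                a[i][j] /= sum;
            }
        }
    }

    /** Normalizes every column so the matrix becomes column-stochastic. */
    public static void normalize(double[][] a) {
        for (int j = 0; j < a[0].length; j++) {
            normalizeColumn(a, j);
        }
    }
}
```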

getNumStates

`public int getNumStates()`
Gets the number of states in the Markov chain.

Returns:
Number of states in the Markov chain.

toString

`public String toString()`
Overrides:
`toString` in class `Object`

getSteadyStateDistribution

`public Vector getSteadyStateDistribution()`
Returns the steady-state probability distribution over the states. This is the eigenvector of the transition-probability matrix associated with its largest eigenvalue, which is 1.

Returns:
Steady-state probability distribution over the states.
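One way to see why the steady state is the eigenvector with eigenvalue 1: repeatedly multiplying any starting distribution by a well-behaved column-stochastic matrix converges to that eigenvector. A power-iteration sketch on plain arrays (not the Foundry's eigenvector-based implementation) illustrates this:

```java
// Illustrative power-iteration sketch: repeated application of a
// column-stochastic transition matrix converges to the steady-state
// distribution (the dominant eigenvector, eigenvalue 1) for ergodic chains.
public class SteadyState {

    /** Applies the chain 'iterations' times starting from the uniform distribution. */
    public static double[] powerIterate(double[][] a, int iterations) {
        int n = a.length;
        double[] p = new double[n];
        java.util.Arrays.fill(p, 1.0 / n);          // start from the uniform distribution
        for (int k = 0; k < iterations; k++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    next[i] += a[i][j] * p[j];      // column convention: (i,j) = P(j -> i)
                }
            }
            p = next;
        }
        return p;
    }
}
```

For the two-state chain with columns (0.9, 0.1) and (0.5, 0.5), the fixed point works out analytically to (5/6, 1/6), which the iteration reproduces.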

getFutureStateDistribution

```public Vector getFutureStateDistribution(Vector current,
int numSteps)```
Simulates the Markov chain into the future, applying the transition Matrix to the given current state-probability distribution for the specified number of time steps.

Parameters:
`current` - Current distribution of probabilities of the various states.
`numSteps` - Number of steps into the future to simulate.
Returns:
State-probability distribution for numSteps into the future, starting from the given state-probability distribution.
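The multi-step simulation above is equivalent to computing A^numSteps times the current distribution. A hedged sketch of that propagation on plain arrays (illustrative only; the Foundry method operates on its `Vector` and `Matrix` types):

```java
// Illustrative sketch of getFutureStateDistribution's semantics: apply the
// column-stochastic transition matrix numSteps times to the current
// distribution, i.e. compute A^numSteps * current.
public class FutureState {

    /** Propagates the state-probability distribution numSteps steps forward. */
    public static double[] futureDistribution(double[][] a, double[] current, int numSteps) {
        double[] p = current.clone();
        for (int k = 0; k < numSteps; k++) {
            double[] next = new double[p.length];
            for (int i = 0; i < a.length; i++) {
                for (int j = 0; j < p.length; j++) {
                    next[i] += a[i][j] * p[j];  // entry (i,j) is P(state j -> state i)
                }
            }
            p = next;
        }
        return p;
    }
}
```

Starting from state 0 with certainty in the two-state chain with columns (0.9, 0.1) and (0.5, 0.5), one step gives (0.9, 0.1) and two steps give (0.86, 0.14).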