3. Usage

3.1. Initialization

class hidden_markov.hmm(states, observations, start_prob, trans_prob, em_prob)

Stores a hidden Markov model object and its model parameters.

Implemented Algorithms:

  • Viterbi Algorithm
  • Forward Algorithm
  • Baum-Welch Algorithm

Initializes the hmm class object.

Parameters:
  • states (A list or tuple) – The set of hidden states
  • observations (A list or tuple) – The set of unique possible observations
  • start_prob (Numpy matrix, dimension = [ len(states) X 1 ]) – The start probabilities of the various states, given in the same order as the ‘states’ variable. start_prob[i] = probability( start at states[i] ).
  • trans_prob (Numpy matrix, dimension = [ len(states) X len(states) ]) – The transition probabilities, with the same ordering as the ‘states’ variable. trans_prob[i,j] = probability( states[i] -> states[j] ).
  • em_prob (Numpy matrix, dimension = [ len(states) X len(observations) ]) – The emission probabilities, with ordering matching the ‘states’ and ‘observations’ variables. em_prob[i,j] = probability of emitting observations[j] from states[i].

Example:

>>> import numpy as np
>>> from hidden_markov import hmm
>>> states = ('s', 't')
>>> possible_observation = ('A','B' )
>>> # Numpy arrays of the data
>>> start_probability = np.matrix( '0.5 0.5 ')
>>> transition_probability = np.matrix('0.6 0.4 ;  0.3 0.7 ')
>>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' )
>>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability)
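
Because each parameter is a probability distribution, start_prob should sum to one, and every row of trans_prob and em_prob should sum to one. A quick sanity check on the matrices above (an editorial sketch, not a library feature):

>>> np.allclose(start_probability.sum(), 1)
True
>>> np.allclose(transition_probability.sum(axis=1), 1)
True
>>> np.allclose(emission_probability.sum(axis=1), 1)
True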

3.2. Viterbi Algorithm

hmm.viterbi(observations)

Finds the most probable sequence of hidden states for a given observation sequence.

Parameters: observations (A list or tuple) – The observation sequence, where each element belongs to the ‘observations’ set passed to __init__.
Returns: The most probable path of hidden states.
Return type: list of states

Features:

Scaling is applied here, which ensures that no underflow errors occur.

Example:

>>> import numpy as np
>>> from hidden_markov import hmm
>>> states = ('s', 't')
>>> possible_observation = ('A','B' )
>>> # Numpy arrays of the data
>>> start_probability = np.matrix( '0.5 0.5 ')
>>> transition_probability = np.matrix('0.6 0.4 ;  0.3 0.7 ')
>>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' )
>>> # Initialize class object
>>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability)
>>> observations = ('A', 'B','B','A')
>>> print(test.viterbi(observations))
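
Worked by hand with the parameters above, the most likely path comes out as ['t', 't', 't', 't']. For readers who want to see the recurrence itself, below is a minimal standalone sketch of an unscaled Viterbi decoder (an illustration only, not the library's implementation, which also applies scaling):

import numpy as np

def viterbi_sketch(obs_seq, states, symbols, start_p, trans_p, em_p):
    # start_p is a 1-D array; trans_p and em_p are 2-D arrays with rows
    # ordered like 'states' and em_p columns ordered like 'symbols'
    idx = [symbols.index(o) for o in obs_seq]
    delta = start_p * em_p[:, idx[0]]            # best-path probability per state
    backptr = []
    for t in idx[1:]:
        scores = delta[:, None] * trans_p        # scores[i, j] = delta[i] * trans_p[i, j]
        backptr.append(scores.argmax(axis=0))    # best predecessor of each state j
        delta = scores.max(axis=0) * em_p[:, t]
    path = [int(delta.argmax())]                 # best final state, then backtrack
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return [states[i] for i in reversed(path)]

Applied to the doctest data (converting the np.matrix inputs to plain arrays):

path = viterbi_sketch(observations, list(states), list(possible_observation),
                      np.asarray(start_probability).ravel(),
                      np.asarray(transition_probability),
                      np.asarray(emission_probability))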

3.3. Forward Algorithm

hmm.forward_algo(observations)

Finds the probability of an observation sequence for the given model parameters.

Parameters: observations (A list or tuple) – The observation sequence, where each element belongs to the ‘observations’ set passed to __init__.
Returns: The probability of occurrence of the observation sequence.
Return type: float

Example:

>>> import numpy as np
>>> from hidden_markov import hmm
>>> states = ('s', 't')
>>> possible_observation = ('A','B' )
>>> # Numpy arrays of the data
>>> start_probability = np.matrix( '0.5 0.5 ')
>>> transition_probability = np.matrix('0.6 0.4 ;  0.3 0.7 ')
>>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' )
>>> # Initialize class object
>>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability)
>>> observations = ('A', 'B','B','A')
>>> print(test.forward_algo(observations))

Note

No scaling is applied here, so this routine is susceptible to underflow errors. Use hmm.log_prob() instead.
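
The underflow is easy to see from the recurrence: the forward variables are products of probabilities, so they shrink roughly exponentially with sequence length and eventually drop below the smallest representable float. A minimal unscaled sketch of the recurrence (for illustration; this is exactly the behaviour that the scaling in hmm.log_prob() avoids):

import numpy as np

def forward_sketch(obs_seq, symbols, start_p, trans_p, em_p):
    # alpha[j] = P(o_1 .. o_t, state_t = j); unscaled, so long inputs underflow
    alpha = start_p * em_p[:, symbols.index(obs_seq[0])]
    for o in obs_seq[1:]:
        alpha = (alpha @ trans_p) * em_p[:, symbols.index(o)]
    return float(alpha.sum())

For the four-symbol example above this works out to roughly 0.0515; for a sequence thousands of symbols long, the unscaled value underflows to 0.0.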

3.4. Baum-Welch Algorithm

hmm.train_hmm(observation_list, iterations, quantities)

Runs the Baum-Welch algorithm and finds the new model parameters.

Parameters:
  • observation_list (A nested list, or a list of lists/tuples) – A list containing multiple observation sequences.
  • iterations (An integer) – Maximum number of iterations for the algorithm.
  • quantities (A list of integers) – Number of times each corresponding sequence in ‘observation_list’ occurs.
Returns:

Returns the emission, transition and start probabilities as numpy matrices

Return type:

Three numpy matrices

Features:

Scaling is applied here, which ensures that no underflow errors occur.

Example:

>>> import numpy as np
>>> from hidden_markov import hmm
>>> states = ('s', 't')
>>> possible_observation = ('A','B' )
>>> # Numpy arrays of the data
>>> start_probability = np.matrix( '0.5 0.5 ')
>>> transition_probability = np.matrix('0.6 0.4 ;  0.3 0.7 ')
>>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' )
>>> # Initialize class object
>>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability)
>>> 
>>> observations = ('A', 'B','B','A')
>>> obs4 = ('B', 'A','B')
>>> observation_tuple = []
>>> observation_tuple.extend( [observations,obs4] )
>>> quantities_observations = [10, 20]
>>> num_iter=1000
>>> e,t,s = test.train_hmm(observation_tuple,num_iter,quantities_observations)
>>> # e, t, s contain the new emission, transition, and start probabilities
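
Since Baum-Welch is an expectation-maximization procedure, the weighted log probability of the training data should not decrease as training proceeds. One way to sanity-check a run using only the documented API (a sketch; exact values depend on the data):

>>> before = test.log_prob(observation_tuple, quantities_observations)
>>> e, t, s = test.train_hmm(observation_tuple, num_iter, quantities_observations)
>>> trained = hmm(states, possible_observation, s, t, e)
>>> after = trained.log_prob(observation_tuple, quantities_observations)
>>> # 'after' should be greater than or equal to 'before'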

3.5. Log-Probability Forward Algorithm

hmm.log_prob(observations_list, quantities)

Finds the weighted log probability of a list of observation sequences.

Parameters:
  • observations_list (A nested list, or a list of lists/tuples) – A list containing multiple observation sequences.
  • quantities (A list of integers) – Number of times each corresponding sequence in ‘observations_list’ occurs.
Returns:

Weighted log probability of multiple observations.

Return type:

float

Features:

Scaling is applied here, which ensures that no underflow errors occur.

Example:

>>> import numpy as np
>>> from hidden_markov import hmm
>>> states = ('s', 't')
>>> possible_observation = ('A','B' )
>>> # Numpy arrays of the data
>>> start_probability = np.matrix( '0.5 0.5 ')
>>> transition_probability = np.matrix('0.6 0.4 ;  0.3 0.7 ')
>>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' )
>>> # Initialize class object
>>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability)
>>> observations = ('A', 'B','B','A')
>>> obs4 = ('B', 'A','B')
>>> observation_tuple = []
>>> observation_tuple.extend( [observations,obs4] )
>>> quantities_observations = [10, 20]
>>>
>>> prob = test.log_prob(observation_tuple, quantities_observations)
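
Conceptually, the returned value is the sum over k of quantities[k] * log P(O_k | model): each sequence's log probability, weighted by how many times it occurs. Ignoring underflow, the same number could be assembled from hmm.forward_algo() (a sketch; hmm.log_prob() computes it with scaling precisely so that the underflow in forward_algo never arises):

>>> import math
>>> approx = sum(q * math.log(test.forward_algo(o))
...              for o, q in zip(observation_tuple, quantities_observations))
>>> # 'approx' should match 'prob' up to floating-point error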