
[Machine Learning] Hidden Markov Models and Their Three Basic Problems (III): Parameter Learning and a Python Implementation


Contents: 1. Computing Some Probabilities and Expected Values · 2. Unsupervised Learning (the Baum-Welch Algorithm) · 3. Python Implementation

Parameter learning for a hidden Markov model estimates the model $\lambda = [A, B, \Pi]$ from an observation sequence $X = \{x_1, x_2, \cdots, x_n\}$. Depending on whether the training data contain both the observation sequence and its corresponding state sequence, or only the observation sequence, the methods divide into supervised and unsupervised learning (Reference 1). Because supervised learning requires manually annotated training data, which is expensive, this post covers only the unsupervised method.

1. Computing Some Probabilities and Expected Values

(1) Given the model $\lambda$ and observation sequence $X$, the probability of being in state $q_i$ at time $t$ is denoted $\gamma_t(i) = P(i_t = q_i \mid X, \lambda)$:

$$\gamma_t(i) = P(i_t = q_i \mid X, \lambda) = \frac{P(i_t = q_i, X \mid \lambda)}{P(X \mid \lambda)}$$

From the definitions of the forward probability $\alpha_t(i)$ and the backward probability $\beta_t(i)$:

$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)}$$
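As a minimal numpy sketch of this formula (assuming `alpha` and `beta` are arrays of shape (n, N) holding the forward and backward probabilities; the helper name is ours, not from the references):

import numpy as np

def gamma_from_alpha_beta(alpha, beta):
    # gamma[t, i] = alpha_t(i) * beta_t(i) / sum_j alpha_t(j) * beta_t(j)
    prod = alpha * beta                            # elementwise alpha_t(i) * beta_t(i)
    return prod / prod.sum(axis=1, keepdims=True)  # normalize each row over the states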

(2) Given the model $\lambda$ and observation sequence $X$, the probability of being in state $q_i$ at time $t$ and in state $q_j$ at time $t+1$ is denoted

$$\xi_t(i,j) = P(i_t = q_i, i_{t+1} = q_j \mid X, \lambda)$$

It is computed from the forward probability $\alpha_t(i)$ and the backward probability $\beta_t(i)$:

$$\xi_t(i,j) = \frac{\alpha_t(i)\,a_{ij}\,b_j(x_{t+1})\,\beta_{t+1}(j)}{\sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_t(i)\,a_{ij}\,b_j(x_{t+1})\,\beta_{t+1}(j)}$$
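A corresponding sketch for $\xi_t(i,j)$ (same assumptions as above; `A` is the N×N transition matrix, `B` the N×M observation matrix, and `O` the sequence of observation indices):

import numpy as np

def xi_from_alpha_beta(alpha, beta, A, B, O):
    n, N = alpha.shape
    xi = np.zeros((n - 1, N, N))
    for t in range(n - 1):
        # num[i, j] = alpha_t(i) * a_ij * b_j(x_{t+1}) * beta_{t+1}(j)
        num = alpha[t][:, None] * A * (B[:, O[t + 1]] * beta[t + 1])[None, :]
        xi[t] = num / num.sum()  # normalize over all (i, j) pairs
    return xi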

(3) Some useful expected values:

The expected number of times state $i$ is visited under observation sequence $X$: $\sum_{t=1}^{n} \gamma_t(i)$

The expected number of transitions out of state $i$ under observation sequence $X$: $\sum_{t=1}^{n-1} \gamma_t(i)$

The expected number of transitions from state $i$ to state $j$ under observation sequence $X$: $\sum_{t=1}^{n-1} \xi_t(i,j)$

These counts fall out of the $\gamma$ and $\xi$ arrays directly, as the sketch below shows.
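A sketch of the three counts, built on the arrays returned by the helpers sketched above:

def expected_counts(gamma, xi):
    visits = gamma.sum(axis=0)           # expected number of visits to each state i
    departures = gamma[:-1].sum(axis=0)  # expected number of transitions out of state i
    pair_counts = xi.sum(axis=0)         # expected number of i -> j transitions
    return visits, departures, pair_counts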

2. Unsupervised Learning (the Baum-Welch Algorithm)

This post does not go through the derivation of the Baum-Welch algorithm; see Reference 1 if needed.

The Baum-Welch algorithm

Input: observation sequence $X = \{x_1, x_2, \cdots, x_n\}$

Output: hidden Markov model parameters $\lambda = [A, B, \Pi]$

(1) Initialization

At iteration $m = 0$, initialize the parameters $a_{ij}^{(0)}$, $b_{ik}^{(0)}$, $\pi_i^{(0)}$, giving the initial model $\lambda^{(0)} = [A^{(0)}, B^{(0)}, \Pi^{(0)}]$.

(2) Recursion, for $m = 1, 2, \cdots$

$$a_{ij}^{(m+1)} = \frac{\sum_{t=1}^{n-1} \xi_t(i,j)}{\sum_{t=1}^{n-1} \gamma_t(i)}$$

$$b_{jk}^{(m+1)} = \frac{\sum_{t=1,\, x_t = v_k}^{n} \gamma_t(j)}{\sum_{t=1}^{n} \gamma_t(j)}$$

$$\pi_i^{(m+1)} = \gamma_1(i)$$

The right-hand sides of all three formulas are computed from the observation sequence and the current model $\lambda^{(m)} = [A^{(m)}, B^{(m)}, \Pi^{(m)}]$ (a vectorized sketch of this step follows the algorithm).

(3) Termination.

Return the model $\lambda^{(m+1)} = [A^{(m+1)}, B^{(m+1)}, \Pi^{(m+1)}]$.
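For reference, the whole recursion step vectorizes into a few numpy lines. This is a sketch under the assumptions above (`gamma` of shape (n, N), `xi` of shape (n-1, N, N), `O` the observation index sequence, `M` the number of observation symbols); `baum_welch_step` is a hypothetical helper, not part of the reference code:

import numpy as np

def baum_welch_step(gamma, xi, O, M):
    # a_ij: expected i -> j transitions over expected transitions out of i
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    # b_jk: expected visits to j while observing v_k over expected visits to j
    indicator = (np.asarray(O)[:, None] == np.arange(M)).astype(float)  # (n, M), 1 where x_t == v_k
    B_new = gamma.T @ indicator / gamma.sum(axis=0)[:, None]
    # pi_i = gamma_1(i)
    Pi_new = gamma[0]
    return A_new, B_new, Pi_new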

3. Python Implementation

The code follows Reference 2.

import numpy as np

class HMM:
    def __init__(self, A, B, Pi, O):
        self.A = np.array(A)      # state transition probability matrix
        self.B = np.array(B)      # observation probability matrix
        self.Pi = np.array(Pi)    # initial state probability vector
        self.O = np.array(O)      # observation sequence
        self.N = self.A.shape[0]  # number of hidden states
        self.M = self.B.shape[1]  # number of observation symbols

    ## 1. Forward algorithm
    def forward(self):
        n = len(self.O)                # length of the observation sequence
        alpha = np.zeros((n, self.N))  # all alpha values
        # initialization
        for i in range(self.N):
            alpha[0, i] = self.Pi[i] * self.B[i, self.O[0]]
        # recursion
        for t in range(n - 1):
            for i in range(self.N):
                summation = 0.0
                for j in range(self.N):
                    summation += alpha[t, j] * self.A[j, i]
                alpha[t + 1, i] = summation * self.B[i, self.O[t + 1]]
        # termination: probability of the observation sequence given the model
        Polambda = 0.0
        for i in range(self.N):
            Polambda += alpha[n - 1, i]
        return Polambda, alpha

    ## 2. Backward algorithm
    def backward(self):
        n = len(self.O)               # length of the observation sequence
        beta = np.zeros((n, self.N))  # all beta values
        # initialization
        for i in range(self.N):
            beta[n - 1, i] = 1.0
        # recursion
        for t in range(n - 2, -1, -1):
            for i in range(self.N):
                summation = 0.0  # for every i, 'summation' must reset to 0
                for j in range(self.N):
                    summation += self.A[i, j] * self.B[j, self.O[t + 1]] * beta[t + 1, j]
                beta[t, i] = summation
        # termination
        Polambda = 0.0
        for i in range(self.N):
            Polambda += self.Pi[i] * self.B[i, self.O[0]] * beta[0, i]
        return Polambda, beta

    ## 3. Compute the probabilities gamma and zeta
    def compute_gamma(self, alpha, beta):
        T = len(self.O)
        gamma = np.zeros((T, self.N))  # gamma[t, i]: probability of state q_i at time t
        for t in range(T):
            for i in range(self.N):
                gamma[t, i] = alpha[t, i] * beta[t, i] / sum(
                    alpha[t, j] * beta[t, j] for j in range(self.N))
        return gamma

    def compute_zeta(self, alpha, beta):
        T = len(self.O)
        zeta = np.zeros((T - 1, self.N, self.N))  # note: T - 1, not T
        for t in range(T - 1):  # note: not T
            for i in range(self.N):
                for j in range(self.N):
                    numerator = alpha[t, i] * self.A[i, j] * self.B[j, self.O[t + 1]] * beta[t + 1, j]
                    denominator = sum(
                        sum(alpha[t, i1] * self.A[i1, j1] * self.B[j1, self.O[t + 1]] * beta[t + 1, j1]
                            for j1 in range(self.N))  # inner sum over j
                        for i1 in range(self.N))      # outer sum over i
                    zeta[t, i, j] = numerator / denominator
        return zeta

    ## 4. Baum-Welch algorithm
    def Baum_Welch(self):
        T = len(self.O)
        V = [k for k in range(self.M)]
        m = 1  # iteration counter
        while m < 10:  # fixed iteration budget; a convergence test could be used instead
            Polambda1, alpha = self.forward()        # get alpha
            Polambda2, beta = self.backward()        # get beta
            gamma = self.compute_gamma(alpha, beta)  # uses alpha and beta
            xi = self.compute_zeta(alpha, beta)
            # update the state transition matrix A
            for i in range(self.N):
                for j in range(self.N):
                    numerator = sum(xi[t, i, j] for t in range(T - 1))
                    denominator = sum(gamma[t, i] for t in range(T - 1))
                    self.A[i, j] = numerator / denominator
            # update the observation probability matrix B
            for j in range(self.N):
                for k in range(self.M):
                    numerator = sum(gamma[t, j] for t in range(T) if self.O[t] == V[k])
                    denominator = sum(gamma[t, j] for t in range(T))
                    self.B[j, k] = numerator / denominator  # fixed: index j, not i
            # update the initial state distribution Pi
            for i in range(self.N):
                self.Pi[i] = gamma[0, i]
            m += 1
        return self.A, self.B, self.Pi

if __name__ == '__main__':
    print('----------------1. Initialize the HMM parameters-----------------')
    A = [[0, 1, 0, 0], [0.4, 0, 0.6, 0], [0, 0.4, 0, 0.6], [0, 0, 0.5, 0.5]]
    B = [[0.5, 0.5], [0.3, 0.7], [0.6, 0.4], [0.8, 0.2]]
    Pi = [0.25, 0.25, 0.25, 0.25]
    print('----------------2. Observation sequence--------------------------')
    O = [0, 1, 0]
    print('----------------3. Baum-Welch algorithm--------------------------')
    hmm = HMM(A, B, Pi, O)
    A1, B1, Pi_1 = hmm.Baum_Welch()
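One way to sanity-check the implementation is to watch $P(X \mid \lambda)$, which EM guarantees is non-decreasing across Baum-Welch iterations. A usage sketch, reusing A, B, Pi, and O from the driver above:

hmm = HMM(A, B, Pi, O)
before, _ = hmm.forward()  # likelihood under the initial model
hmm.Baum_Welch()
after, _ = hmm.forward()   # likelihood under the re-estimated model
print('P(X|lambda): %.6f -> %.6f' % (before, after))  # expect after >= before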

References

1. Li Hang, Statistical Learning Methods (《统计学习方法》).

2. /d-roger/articles/5719979.html
