The previous section introduced using the forward-backward algorithm to compute the marginal probability distribution of a hidden state in a chain-structured conditional random field. This section covers the task of learning the model parameters of a conditional random field.
A conditional random field operates on sequence data. Taking part-of-speech tagging as an example, the dataset $\mathcal X$ and the corresponding sequence labels $\mathcal Y$ are denoted as follows:
where $T$ denotes the sequence length.
$$\begin{aligned} \mathcal X & = \{x^{(1)},x^{(2)},\cdots,x^{(N)}\} \\ \mathcal Y & = \{y^{(1)},y^{(2)},\cdots,y^{(N)}\} \\ x^{(i)},y^{(i)} & \in \mathbb R^T; \quad i=1,2,\cdots,N \end{aligned}$$
The marginal-distribution problem of the inference task is: given the model, for an unseen sample sequence $x_{1:T}$, compute the conditional probability of the tag $y_t$ at a given position of its part-of-speech tag sequence:
Here the hidden states are discrete random variables taking values such as [noun, verb, adjective, adverb, ...]; their value set is denoted $\mathcal K$.
$$\mathcal P(y_t=j \mid x_{1:T}) \quad \begin{cases} t \in \{1,2,\cdots,T\} \\ j \in \mathcal K = \{1,2,\cdots,|\mathcal K|\} \end{cases}$$
In the forward-backward algorithm (Forward-Backward Algorithm), the factor decomposition of $\mathcal P(y_t \mid x_{1:T})$ is split into two parts:
where $\mathcal Z$ denotes the partition function.
$$\begin{aligned} \mathcal P(y_t \mid x_{1:T}) & = \sum_{y_1,\cdots,y_{t-1}} \sum_{y_{t+1},\cdots,y_T}\frac{1}{\mathcal Z} \prod_{t'=1}^{T-1} \psi_{t'}(y_{t'},y_{t'+1},x_{1:T}) \\ & = \frac{1}{\mathcal Z} \left[\sum_{y_1,\cdots,y_{t-1}} \prod_{i=1}^{t-1}\psi_i(y_i,y_{i+1},x_{1:T})\right] \cdot \left[\sum_{y_{t+1},\cdots,y_T} \prod_{i=t}^{T-1} \psi_i(y_i,y_{i+1},x_{1:T})\right] \\ & = \frac{1}{\mathcal Z} \Delta_{left} \cdot \Delta_{right} \end{aligned}$$
For a chain-structured conditional random field, both the forward part and the backward part can be simplified by variable elimination (Variable Elimination, VE):
$$\begin{aligned} \Delta_{left} = \alpha_t(k) & = \sum_{y_{t-1}} \psi_{t-1}(y_{t-1},y_t = k,x_{1:T}) \cdots \sum_{y_2}\psi_2(y_2,y_3,x_{1:T})\sum_{y_1}\psi_1(y_1,y_2,x_{1:T}) \\ \Delta_{right} = \beta_t(m) & = \sum_{y_{t+1}} \psi_t(y_t = m,y_{t+1},x_{1:T}) \sum_{y_{t+2}} \psi_{t+1}(y_{t+1},y_{t+2},x_{1:T})\cdots \sum_{y_T} \psi_{T-1}(y_{T-1},y_T,x_{1:T}) \end{aligned}$$
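To make the two recursions concrete, here is a minimal NumPy sketch (our own illustration, not part of the original derivation). It assumes the potentials for one fixed observation $x_{1:T}$ have been tabulated into an array `psi` of shape `(T-1, K, K)`, where `psi[t, k, m]` holds $\psi_t(y_t = k, y_{t+1} = m, x_{1:T})$ under 0-based indexing:

```python
import numpy as np

def forward_backward(psi):
    """Forward/backward messages for a chain CRF.

    psi: array of shape (T-1, K, K); psi[t, k, m] tabulates the potential
         psi_t(y_t = k, y_{t+1} = m, x_{1:T}) for one fixed observation x.
    Returns alpha, beta, each of shape (T, K).
    """
    T = psi.shape[0] + 1
    K = psi.shape[1]

    # Delta_left: alpha[t, k] sums out y_1..y_{t-1} from psi_1..psi_{t-1}.
    alpha = np.empty((T, K))
    alpha[0] = 1.0                              # empty product at t = 1
    for t in range(1, T):
        alpha[t] = alpha[t - 1] @ psi[t - 1]    # sum over y_{t-1}

    # Delta_right: beta[t, m] sums out y_{t+1}..y_T from psi_t..psi_{T-1}.
    beta = np.empty((T, K))
    beta[T - 1] = 1.0                           # empty product at t = T
    for t in range(T - 2, -1, -1):
        beta[t] = psi[t] @ beta[t + 1]          # sum over y_{t+1}

    return alpha, beta
```

With these messages, `Z = alpha[-1].sum()` and the single-state marginal of the previous section is `alpha[t] * beta[t] / Z`; a production implementation would work in log space to avoid numerical overflow.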
As described in the overview of the tasks a conditional random field solves, the learning task is in essence the search for the optimal model parameters $\hat{\theta}$, judged by the criterion $\mathcal P(\mathcal Y \mid \mathcal X)$:
The samples in the sets $\mathcal X,\mathcal Y$ are assumed independent and identically distributed.
$$\begin{aligned} \hat {\theta} & = \mathop{\arg\max}\limits_{\theta} \mathcal P(\mathcal Y \mid \mathcal X) = \mathop{\arg\max}\limits_{\theta} \prod_{i=1}^{N} \mathcal P(y^{(i)} \mid x^{(i)}) \end{aligned}$$
As laid out in the representation of the modeling object, the model parameter $\theta$ consists of two parts, $\lambda$ and $\eta$, which carry the parameter information of the transition feature functions $s(y_{t+1},y_t,x_{1:T})$ and the state feature functions $g(y_t,x_{1:T})$, respectively.
$$\theta = (\lambda,\eta)^T = (\lambda_1,\cdots,\lambda_{\mathcal M},\eta_1,\cdots,\eta_{\mathcal L})^T$$
The corresponding modeling object $\mathcal P(y \mid x)$ is written below, where $x,y$ denote an arbitrary sample from the dataset $\mathcal X$ and its corresponding label sequence; here $x$ and $x_{1:T}$ are interchangeable.
$$\mathcal P(y \mid x) = \frac{1}{\mathcal Z(x_{1:T},\lambda,\eta)} \exp \left[\lambda^T \sum_{t=1}^{T-1} s(y_{t},y_{t+1},x_{1:T}) + \eta^T \sum_{t=1}^Tg(y_t,x_{1:T})\right]$$
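As a sketch of how this modeling object might be evaluated in code, the exponent, i.e. the unnormalized log-probability, is shown below; `s_feat` and `g_feat` are hypothetical names for user-supplied implementations of $s$ and $g$, each returning a feature vector:

```python
import numpy as np

def log_score(y, x, lam, eta, s_feat, g_feat):
    """Exponent of the CRF modeling object: the unnormalized log-probability.

    y: label sequence of length T; x: the whole observation x_{1:T}.
    lam: (M,) numpy weights for transition features; eta: (L,) for state features.
    s_feat(y_t, y_t1, x) -> (M,) and g_feat(y_t, x) -> (L,) are assumed,
    user-supplied feature functions (hypothetical names).
    """
    T = len(y)
    # Sum the transition features over t = 1..T-1 and state features over t = 1..T.
    transition = sum((s_feat(y[t], y[t + 1], x) for t in range(T - 1)),
                     np.zeros_like(lam))
    state = sum((g_feat(y[t], x) for t in range(T)), np.zeros_like(eta))
    return lam @ transition + eta @ state
```

Exponentiating this score and dividing by $\mathcal Z(x_{1:T},\lambda,\eta)$ recovers $\mathcal P(y \mid x)$.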
Refining the optimal parameter $\hat{\theta}$ into its components and substituting $x^{(i)},y^{(i)}$ gives:
A $\log$ is introduced during the expansion; as in maximum likelihood estimation, the $\log$ function does not change monotonicity and therefore does not affect which parameters are optimal.
$$\begin{aligned} \hat {\theta} \Rightarrow \hat \lambda,\hat {\eta} & = \mathop{\arg\max}\limits_{\lambda,\eta} \prod_{i=1}^N \mathcal P(y^{(i)} \mid x^{(i)}) \Rightarrow \mathop{\arg\max}\limits_{\lambda,\eta} \left[\log \prod_{i=1}^N \mathcal P(y^{(i)} \mid x^{(i)})\right] \\ & = \mathop{\arg\max}\limits_{\lambda,\eta} \sum_{i=1}^N \log \mathcal P(y^{(i)} \mid x^{(i)}) \\ & = \mathop{\arg\max}\limits_{\lambda,\eta} \sum_{i=1}^N \log\left\{\frac{1}{\mathcal Z(x_{1:T}^{(i)},\lambda,\eta)} \exp \left[\lambda^T\sum_{t=1}^{T-1}s(y_t^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) + \eta^T \sum_{t=1}^T g(y_t^{(i)},x_{1:T}^{(i)})\right]\right\} \end{aligned}$$
Expanding further, the $\log$ and $\exp$ cancel:
$$\begin{aligned} \hat {\lambda},\hat {\eta} & = \mathop{\arg\max}\limits_{\lambda,\eta} \sum_{i=1}^N \left[\log \frac{1}{\mathcal Z(x_{1:T}^{(i)},\lambda,\eta)} + \log \exp \left(\lambda^T\sum_{t=1}^{T-1} s(y_t^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) + \eta^T\sum_{t=1}^T g(y_t^{(i)},x_{1:T}^{(i)})\right)\right]\\ & = \mathop{\arg\max}\limits_{\lambda,\eta} \sum_{i=1}^N\left[-\log \mathcal Z\left(x_{1:T}^{(i)},\lambda,\eta\right) + \lambda^T \sum_{t=1}^{T-1} s(y_t^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) + \eta^T \sum_{t=1}^T g(y_t^{(i)},x_{1:T}^{(i)}) \right] \end{aligned}$$
Define the objective function $\mathcal J(\lambda,\eta,x_{1:T}^{(i)})$ as:
$$\mathcal J(\lambda,\eta,x_{1:T}^{(i)}) = \sum_{i=1}^N\left[-\log \mathcal Z\left(x_{1:T}^{(i)},\lambda,\eta\right) + \lambda^T \sum_{t=1}^{T-1} s(y_t^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) + \eta^T \sum_{t=1}^T g(y_t^{(i)},x_{1:T}^{(i)}) \right]$$
$$\hat {\lambda},\hat \eta = \mathop{\arg\max}\limits_{\lambda,\eta} \mathcal J(\lambda,\eta,x_{1:T}^{(i)})$$
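A direct transcription of $\mathcal J$ into code might look as follows, reusing `log_score` from above; `log_partition(x, lam, eta)` is an assumed helper that would tabulate the potentials and run the forward recursion sketched earlier, returning $\log \mathcal Z$:

```python
def objective(data, lam, eta, s_feat, g_feat, log_partition):
    """J(lambda, eta): log-likelihood of the dataset under the CRF.

    data: iterable of (x, y) training pairs. Each term is the
    unnormalized log-score minus log Z(x, lambda, eta).
    """
    return sum(log_score(y, x, lam, eta, s_feat, g_feat)
               - log_partition(x, lam, eta)
               for x, y in data)
```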
For this maximization problem, the standard tool is gradient ascent (Gradient Ascent). Taking the solution for $\lambda$ as the example, the steps are as follows:
Compute the gradient $\nabla_{\lambda}\mathcal J(\lambda,\eta,x_{1:T}^{(i)})$:
Inspecting $\mathcal J(\lambda,\eta,x_{1:T}^{(i)})$, the term $\eta^T\sum_{t=1}^T g(y_t^{(i)},x_{1:T}^{(i)})$ does not involve $\lambda$. Symmetrically, when computing $\nabla_{\eta}\mathcal J(\lambda,\eta,x_{1:T}^{(i)})$, the term $\lambda^T\sum_{t=1}^{T-1} s(y_t^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)})$ does not involve $\eta$; this post therefore only works out the gradient for $\lambda$.
$$\nabla_{\lambda}\mathcal J(\lambda,\eta,x_{1:T}^{(i)}) = \sum_{i=1}^N \left[\sum_{t=1}^{T-1} s(y_{t}^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) - \nabla_{\lambda} \log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right]$$
Here $\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)$ is called the log partition function (log Partition Function), a concept that first appeared in the introduction to exponential family distributions. The post on exponential family distributions (the relation between sufficient statistics and model parameters) derived the relationship between the derivative of the log partition function and the sufficient statistic:
$$\begin{aligned} \mathcal A'(\eta) = \frac{\partial \mathcal A(\eta)}{\partial \eta} & = \int_{x} \frac{1}{e^{\mathcal A(\eta)}} \cdot h(x)e^{\eta^T \phi(x)} \cdot \phi(x)\, dx \\ & = \int_{x} \mathcal P(x \mid \eta) \cdot \phi(x)\, dx \\ & = \mathbb E_{\mathcal P(x \mid \eta)}[\phi(x)] \end{aligned}$$
where $\mathcal A(\eta)$ denotes the log partition function, $\phi(x)$ denotes the sufficient statistic of the sample $x$, and $\mathcal P(x\mid \eta)$ (also written $\mathcal P(x;\eta)$) denotes the distribution that $x$ follows.
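As a quick sanity check of this identity (our own example, not from the original text), take the Bernoulli distribution in exponential-family form, with sufficient statistic $\phi(x) = x$ and log partition function $\mathcal A(\eta) = \log(1 + e^{\eta})$:
$$\mathcal P(x \mid \eta) = \exp\left(\eta x - \mathcal A(\eta)\right), \quad x \in \{0,1\}$$
$$\mathcal A'(\eta) = \frac{e^{\eta}}{1 + e^{\eta}} = \mathcal P(x = 1 \mid \eta) = \mathbb E_{\mathcal P(x \mid \eta)}[\phi(x)]$$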
Note that in the CRF learning task the log partition function is written as $\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)$, but $x_{1:T}^{(i)}$ is merely the known observation (sample data), not a variable of the log partition function; the only variables are $\lambda$ and $\eta$.
How, then, is the sufficient statistic of a specific sample $x^{(i)}$ expressed inside the modeling object $\mathcal P(y^{(i)} \mid x^{(i)})$? Recall the expansion of the modeling object $\mathcal P(y \mid x)$ above: since the conditional random field still uses the maximum entropy model as its underlying logic, the term inside the braces $\{\cdot\}$ is exactly a maximum entropy model, and the maximum entropy model's definition is itself derived from the definition of the exponential family (see the post on the relationship between the maximum entropy principle and exponential family distributions). The sufficient statistic is therefore $\sum_{t=1}^{T-1} s(y_{t},y_{t+1},x_{1:T})$. Finding the sufficient statistic is not the end, however, because what we need is the sufficient statistic of the particular sample $x_{1:T}^{(i)}$, which amounts to collecting statistics over the hidden states $y_t,y_{t+1}$. Consider what $y_t,y_{t+1}$ actually represent:
What is being counted is the set of discrete values the hidden states take at time steps $t$ and $t+1$; exactly how the counting is carried out is not our focus here. The sufficient statistic of the sample $x_{1:T}^{(i)}$ is then expressed as follows:
The essential point here is that $y_t,y_{t+1}$ carry no sample superscript: they stand for the hidden-state values at time steps $t,t+1$ over all samples, and are not the same objects as the $y_t,y_{t+1}$ in the modeling object above. In the modeling object, the $y$ in $y_t,y_{t+1}$ denotes the label sequence attached to one particular sample $x$ from the set $\mathcal X$.
Substituting the derivative of the log partition function (intuitively: taking an expectation under the full hidden-state distribution) gives:
$$\nabla_{\lambda} \left[\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right] = \mathbb E_{\mathcal Y}\left[\sum_{t=1}^{T-1}s(y_t,y_{t+1},x_{1:T}^{(i)})\right] \quad \mathcal Y = (y_1,\cdots,y_T)^T$$
Next, expand this expectation:
$$\mathbb E_{\mathcal Y}\left[\sum_{t=1}^{T-1}s(y_t,y_{t+1},x_{1:T}^{(i)})\right] = \sum_{\mathcal Y} \mathcal P(\mathcal Y \mid x_{1:T}^{(i)}) \left[\sum_{t=1}^{T-1}s(y_{t},y_{t+1},x_{1:T}^{(i)})\right]$$
Consider the time complexity of evaluating this expression directly: $\mathcal Y$ contains $y_1,\cdots,y_T$, and each hidden state $y_t\,(t=1,2,\cdots,T)$ takes $|\mathcal K|$ possible values, so the cost is:
$$O(|\mathcal K|^T) \cdot O(T-1) = O[(T-1)|\mathcal K|^T]$$
This complexity is exponential in $T$ and effectively intractable to evaluate directly.
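For a sense of scale: with $|\mathcal K| = 10$ part-of-speech tags and a sentence of length $T = 20$, the direct sum already involves about $(20-1) \times 10^{20} \approx 1.9 \times 10^{21}$ terms.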
Observe that the sum over $\mathcal Y$ does not interact with any single $t$, so $\sum_{\mathcal Y}\mathcal P(\mathcal Y \mid x_{1:T}^{(i)})$ can be treated as a whole and $\sum_{t=1}^{T-1}$ moved to the front:
$$\nabla_{\lambda}\left[\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right] = \sum_{t=1}^{T-1} \left[\sum_{\mathcal Y} \mathcal P(\mathcal Y \mid x_{1:T}^{(i)})\, s(y_{t},y_{t+1},x_{1:T}^{(i)})\right]$$
Split $\sum_{\mathcal Y}$ into three parts:
$$\sum_{\mathcal Y} = \sum_{y_1,\cdots,y_T} = \sum_{y_1,\cdots,y_{t-1}} \sum_{y_{t},y_{t+1}} \sum_{y_{t+2},\cdots,y_{T}}$$
Swapping the order of summation yields the final form below; the bracketed term can be marginalized (in the manner of integrating out a probability density) over $y_1,\cdots,y_{t-1},y_{t+2},\cdots,y_T$ in $\mathcal Y$, leaving only the two variables $y_t,y_{t+1}$:
$$\begin{aligned} \sum_{y_1,\cdots,y_{t-1}}\sum_{y_{t+2},\cdots,y_{T}}\mathcal P(\mathcal Y\mid x_{1:T}^{(i)}) & = \sum_{y_1,\cdots,y_{t-1}}\sum_{y_{t+2},\cdots,y_{T}}\mathcal P(y_1,y_2,\cdots,y_T\mid x_{1:T}^{(i)}) \\ & = \mathcal P(y_{t},y_{t+1} \mid x_{1:T}^{(i)}) \end{aligned}$$
The simplified result is precisely the marginal probability distribution of $y_t,y_{t+1}$ ($x_{1:T}^{(i)}$ is known information, not a variable).
$$\begin{aligned} \nabla_{\lambda}\left[\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right] & = \sum_{t=1}^{T-1} \sum_{y_{t},y_{t+1}} \left[\sum_{y_1,\cdots,y_{t-1}}\sum_{y_{t+2},\cdots,y_{T}}\mathcal P(\mathcal Y\mid x_{1:T}^{(i)}) \cdot s(y_{t},y_{t+1},x_{1:T}^{(i)})\right] \\ & = \sum_{t=1}^{T-1} \sum_{y_{t},y_{t+1}} \mathcal P(y_t,y_{t+1} \mid x_{1:T}^{(i)}) \cdot s(y_{t},y_{t+1},x_{1:T}^{(i)}) \end{aligned}$$
The expression has now been reduced to one involving only the two variables $y_t,y_{t+1}$. The feature function $s$ is given by definition and directly computable; $\mathcal P(y_t,y_{t+1} \mid x_{1:T}^{(i)})$ is obtained with the forward-backward algorithm.
Because this marginal distribution is over two adjacent hidden states as a whole, one clique must be left out of the elimination. In formulas:
$$\begin{aligned} \mathcal P(y_t,y_{t+1} \mid x_{1:T}^{(i)}) & = \sum_{y_1,\cdots,y_{t-1}} \sum_{y_{t+2},\cdots,y_T} \frac{1}{\mathcal Z} \prod_{t'=1}^{T-1}\psi_{t'}(y_{t'},y_{t'+1},x_{1:T}^{(i)}) \\ & = \frac{1}{\mathcal Z}\left[\sum_{y_1,\cdots,y_{t-1}} \psi_1(y_1,y_2,x_{1:T}^{(i)}) \cdots \psi_{t-1}(y_{t-1},y_t,x_{1:T}^{(i)})\right] \cdot \psi_{t}(y_t,y_{t+1},x_{1:T}^{(i)}) \cdot \left[\sum_{y_{t+2},\cdots,y_T}\psi_{t+1}(y_{t+1},y_{t+2},x_{1:T}^{(i)}) \cdots \psi_{T-1}(y_{T-1},y_{T},x_{1:T}^{(i)})\right] \\ & = \frac{1}{\mathcal Z} \left(\Delta_{left} \cdot \psi_{t}(y_t,y_{t+1},x_{1:T}^{(i)}) \cdot \Delta_{right}\right) \end{aligned}$$
$$\begin{cases} \Delta_{left} = \sum_{y_1}\psi_1(y_1,y_2,x_{1:T}^{(i)})\cdots \sum_{y_{t-1}} \psi_{t-1}(y_{t-1},y_t,x_{1:T}^{(i)}) \\ \Delta_{right} = \sum_{y_{t+2}} \psi_{t+1}(y_{t+1},y_{t+2},x_{1:T}^{(i)}) \cdots\sum_{y_{T}} \psi_{T-1}(y_{T-1},y_T,x_{1:T}^{(i)}) \end{cases}$$
Setting $\alpha_t(k) = \Delta_{left}(y_t = k)$ and $\beta_{t+1}(j) = \Delta_{right}(y_{t+1} = j)$ gives:
$$\mathcal P(y_t,y_{t+1} \mid x_{1:T}^{(i)}) = \frac{1}{\mathcal Z}\,\alpha_t(k) \cdot \psi_{t}(y_t,y_{t+1},x_{1:T}^{(i)}) \cdot \beta_{t+1}(j)$$
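In code, this pairwise marginal falls out of the `forward_backward` messages sketched earlier in one broadcast product (a sketch under the same assumed `(T-1, K, K)` potential-table convention):

```python
def pairwise_marginals(psi):
    """P(y_t, y_{t+1} | x) for every t, via the alpha/beta messages.

    psi: (T-1, K, K) potential tables for one observation x.
    Returns marg of shape (T-1, K, K) with
    marg[t, k, j] = alpha_t(k) * psi_t(k, j) * beta_{t+1}(j) / Z.
    """
    alpha, beta = forward_backward(psi)
    Z = alpha[-1].sum()                 # partition function Z
    return alpha[:-1, :, None] * psi * beta[1:, None, :] / Z
```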
At this point, nothing further stands in the way of computing the gradient term $\nabla_{\lambda}\left[\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right]$:
$$\begin{aligned} \nabla_{\lambda}\left[\log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right] & = \sum_{t=1}^{T-1} \sum_{y_{t},y_{t+1}} \mathcal P(y_t,y_{t+1} \mid x_{1:T}^{(i)}) \cdot s(y_{t},y_{t+1},x_{1:T}^{(i)}) \\ & = \frac{1}{\mathcal Z}\sum_{t=1}^{T-1} \sum_{y_t,y_{t+1}} \alpha_t(k) \cdot \psi_{t}(y_t,y_{t+1},x_{1:T}^{(i)}) \cdot \beta_{t+1}(j) \cdot s(y_{t},y_{t+1},x_{1:T}^{(i)}) \end{aligned}$$
The final gradient $\nabla_{\lambda}\mathcal J(\lambda,\eta,x_{1:T}^{(i)})$ with respect to the model parameter $\lambda$ is therefore:
$$\begin{aligned} \nabla_{\lambda}\mathcal J(\lambda,\eta,x_{1:T}^{(i)}) & = \sum_{i=1}^N \left[\sum_{t=1}^{T-1} s(y_{t}^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) - \nabla_{\lambda} \log \mathcal Z(x_{1:T}^{(i)},\lambda,\eta)\right] \\ & = \sum_{i=1}^N \left[\sum_{t=1}^{T-1} s(y_{t}^{(i)},y_{t+1}^{(i)},x_{1:T}^{(i)}) - \frac{1}{\mathcal Z}\sum_{t=1}^{T-1} \sum_{y_t,y_{t+1}} \alpha_t(k) \cdot \psi_{t}(y_t,y_{t+1},x_{1:T}^{(i)}) \cdot \beta_{t+1}(j) \cdot s(y_{t},y_{t+1},x_{1:T}^{(i)})\right] \end{aligned}$$
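Putting the pieces together, here is a sketch of this gradient, "empirical feature counts minus model-expected feature counts", reusing `s_feat` and `pairwise_marginals` from above; `build_psi(x, lam, eta)` is an assumed helper that tabulates the potentials $\psi_t$ for one sample, and labels are assumed to be integer-encoded as `0..K-1` to match the marginal table indices:

```python
import numpy as np

def grad_lambda(data, lam, eta, s_feat, build_psi):
    """Gradient of J w.r.t. lambda, summed over the whole dataset."""
    grad = np.zeros_like(lam)
    for x, y in data:
        T = len(y)
        # First term: transition features of the observed label sequence.
        for t in range(T - 1):
            grad += s_feat(y[t], y[t + 1], x)
        # Second term: expectation of s under the pairwise marginals.
        marg = pairwise_marginals(build_psi(x, lam, eta))   # (T-1, K, K)
        K = marg.shape[1]
        for t in range(T - 1):
            for k in range(K):
                for j in range(K):
                    grad -= marg[t, k, j] * s_feat(k, j, x)
    return grad
```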
With the iteration direction for the model parameter $\lambda$ determined, the optimum can be approached iteratively (the derivation for $\eta$ is analogous and not repeated here; feel free to leave a comment if you would like to discuss it):
$$\lambda^{(k+1)} = \lambda^{(k)} + d_{\lambda} \cdot \nabla_{\lambda} \mathcal J(\lambda^{(k)},\eta^{(k)},x_{1:T}^{(i)})$$
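The iteration itself is a plain gradient-ascent loop; the step size `d_lam` (the $d_{\lambda}$ above) and the iteration count below are arbitrary illustrative choices, and $\eta$ would be updated the same way with its own gradient:

```python
d_lam = 0.1        # step size d_lambda; an illustrative choice
for _ in range(100):
    lam = lam + d_lam * grad_lambda(data, lam, eta, s_feat, build_psi)
```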
This completes the procedure for learning the model parameters, and with it the coverage of conditional random fields. The next section returns to the Gaussian distribution and introduces some of its foundational logic.