
Latent Dirichlet Allocation (LDA) and Python Practice

Posted: 2018-08-15 16:10:02


1. Prerequisites

The prerequisites cover conjugate prior distributions, the Dirichlet distribution, and a few related concepts.

(1) Conjugate prior distributions

The distribution of the k-th order statistic X_(k) of n i.i.d. Uniform(0,1) random variables X_1, ..., X_n:

\begin{aligned}
P\left(x \leq X_{(k)} \leq x+\Delta x\right)
&= n \binom{n-1}{k-1} \prod_{i=1}^{n} P\left(X_{i}\right) \\
&= n \binom{n-1}{k-1} x^{k-1}(1-x-\Delta x)^{n-k}\, \Delta x \\
&= n \binom{n-1}{k-1} x^{k-1}(1-x)^{n-k}\, \Delta x + o(\Delta x)
\end{aligned}

The probability density function of X_(k):

\begin{aligned}
f(x) &= \lim_{\Delta x \rightarrow 0} \frac{P\left(x \leq X_{(k)} \leq x+\Delta x\right)}{\Delta x} \\
&= n \binom{n-1}{k-1} x^{k-1}(1-x)^{n-k} \\
&= \frac{n!}{(k-1)!\,(n-k)!}\, x^{k-1}(1-x)^{n-k}, \quad x \in [0, 1]
\end{aligned}

(2) The Γ function

The Γ function extends the factorial to the real numbers:

\begin{aligned}
\Gamma(x) &= \int_{0}^{\infty} t^{x-1} e^{-t}\, dt \\
\Gamma(x+1) &= x\, \Gamma(x) \\
\Gamma(n) &= (n-1)!
\end{aligned}
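As a quick sanity check of these identities, the following sketch (not part of the original post) evaluates them numerically with scipy.special.gamma:

from math import factorial

from scipy.special import gamma

# Gamma(n) = (n-1)! for positive integers n
for n in range(1, 6):
    assert abs(gamma(n) - factorial(n - 1)) < 1e-9

# Gamma(x+1) = x * Gamma(x) also holds for non-integer x
x = 2.5
print(gamma(x + 1), x * gamma(x))  # both ~= 3.3234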

Therefore, the probability density function of X_(k) can be rewritten as:

\begin{aligned}
f(x) &= \frac{\Gamma(n+1)}{\Gamma(k)\,\Gamma(n-k+1)}\, x^{k-1}(1-x)^{n-k} \\
\alpha &= k, \quad \beta = n-k+1 \\
f(x) &= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}
\end{aligned}
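This is exactly the Beta(k, n-k+1) density. A small simulation sketch (assuming, as above, that X_1, ..., X_n are i.i.d. Uniform(0,1); the sample size of 100,000 is arbitrary) confirms that the k-th order statistic matches this distribution:

import numpy as np
from scipy.stats import beta

n, k = 10, 3
# draw 100,000 samples of (X_1, ..., X_n), sort each row, take the k-th smallest
samples = np.sort(np.random.rand(100_000, n), axis=1)[:, k - 1]

dist = beta(k, n - k + 1)            # Beta(alpha=k, beta=n-k+1)
print(samples.mean(), dist.mean())   # both close to k/(n+1) = 0.2727
print(samples.var(), dist.var())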

(3) The Beta distribution

(Figure: probability density curves of the Beta distribution.)
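Since the original plot is not included, here is a minimal sketch that reproduces the usual picture with scipy.stats.beta and matplotlib (the parameter pairs are chosen only for illustration):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0.001, 0.999, 500)
for a, b in [(0.5, 0.5), (1, 1), (2, 2), (2, 5), (5, 2)]:
    plt.plot(x, beta.pdf(x, a, b), label=f"Beta({a}, {b})")
plt.legend()
plt.title("Beta distribution probability density curves")
plt.show()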

(4) Conjugate distributions

Line of reasoning:

If the posterior distribution and the prior distribution belong to the same family, then the prior and the posterior are called conjugate distributions, and the prior is called the conjugate prior of the likelihood function.

That is, the Beta distribution is the conjugate prior of the binomial distribution.

prior distribution + knowledge from the data = posterior distribution

Conjugate prior distribution

In Bayesian probability theory, if the posterior probability P(θ|x) and the prior probability p(θ) follow the same family of distributions, then the prior and the posterior are called conjugate distributions, and the prior is called the conjugate prior of the likelihood function.
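As a concrete illustration of "prior + data = posterior" for the Beta-binomial pair (the numbers below are made up for illustration): a Beta(a0, b0) prior combined with k successes in n Bernoulli trials gives a Beta(a0 + k, b0 + n - k) posterior.

from scipy.stats import beta

a0, b0 = 2, 2                         # prior: Beta(2, 2)
n, k = 10, 7                          # data: 7 successes in 10 trials
a_post, b_post = a0 + k, b0 + n - k   # posterior: Beta(9, 5)

print("prior mean:    ", beta(a0, b0).mean())          # 0.5
print("posterior mean:", beta(a_post, b_post).mean())  # 9/14 ~ 0.643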

(5) The Dirichlet distribution

The Dirichlet distribution is a family of continuous multivariate probability distributions; it is the multivariate generalization of the Beta distribution.

Definition of the Dirichlet distribution:

\begin{aligned}
p(\vec{p} \mid \vec{\alpha}) = \operatorname{Dir}(\vec{p} \mid \vec{\alpha})
&\triangleq \frac{\Gamma\left(\sum_{k=1}^{K} \alpha_{k}\right)}{\prod_{k=1}^{K} \Gamma\left(\alpha_{k}\right)} \prod_{k=1}^{K} p_{k}^{\alpha_{k}-1} \\
&\triangleq \frac{1}{\Delta(\vec{\alpha})} \prod_{k=1}^{K} p_{k}^{\alpha_{k}-1}
\end{aligned}

\Delta(\vec{\alpha}) = \frac{\prod_{k=1}^{\dim \vec{\alpha}} \Gamma\left(\alpha_{k}\right)}{\Gamma\left(\sum_{k=1}^{\dim \vec{\alpha}} \alpha_{k}\right)}

The Dirichlet distribution is the conjugate prior of the multinomial distribution.

(Figure: the effect of the parameter α on the Dirichlet distribution.)
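Since the plot is not included, the sketch below (using numpy's Dirichlet sampler; the α values are arbitrary) illustrates the same point: a small symmetric α pushes samples toward the corners of the probability simplex, while a large α concentrates them around the uniform point.

import numpy as np

rng = np.random.default_rng(0)
for alpha in (0.1, 1.0, 10.0):
    # five samples from a symmetric 3-dimensional Dirichlet(alpha, alpha, alpha)
    samples = rng.dirichlet([alpha] * 3, size=5)
    print(f"alpha = {alpha}:")
    print(np.round(samples, 3))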

(6) Gibbs sampling

The Gibbs sampling algorithm works by picking one dimension of the probability vector at a time and sampling a value for that dimension conditioned on the values of all other dimensions. It iterates in this way until convergence and then outputs the parameters to be estimated.
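A minimal Gibbs sampling sketch on a toy target (a standard bivariate normal with correlation ρ = 0.8, not the LDA sampler itself): each step resamples one coordinate from its conditional distribution given the other.

import numpy as np

rho, n_iter = 0.8, 10_000
rng = np.random.default_rng(42)
x, y = 0.0, 0.0
samples = []
for _ in range(n_iter):
    # conditionals of a standard bivariate normal: x | y ~ N(rho*y, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])  # close to rho = 0.8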

2. LDA Theory

Latent Dirichlet allocation (LDA) is a topic model that gives the topics of each document in a corpus in the form of a probability distribution. It is an unsupervised learning algorithm: training requires no hand-labeled data, only the document collection and the desired number of topics k. A further advantage of LDA is that for each topic it can find a set of words that describe it.

(Figure: the Bayesian network structure of LDA.)

3. LDA in Practice

from sklearn.decomposition import LatentDirichletAllocation

def LDAEstimator(data, nClass=22):
    print("using LDA...")
    # n_topics was renamed to n_components in newer scikit-learn releases
    lda = LatentDirichletAllocation(n_components=nClass,
                                    learning_offset=50.,
                                    random_state=666)
    docres = lda.fit_transform(data)
    return lda, docres
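A minimal usage sketch for the function above (the toy documents and the CountVectorizer preprocessing are assumptions, not part of the original post): scikit-learn's LDA expects a document-term count matrix.

from sklearn.feature_extraction.text import CountVectorizer

docs = ["Sugar is bad to consume.",
        "My father spends a lot of time driving my sister around."]
vectorizer = CountVectorizer(stop_words="english")
data = vectorizer.fit_transform(docs)   # document-term count matrix

lda, docres = LDAEstimator(data, nClass=2)
print(docres.shape)                     # (n_documents, n_topics)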

Alternatively, with gensim:

doc1 = "Sugar is bad to consume. My sister likes to have sugar, but not my father."
doc2 = "My father spends a lot of time driving my sister around to dance practice."
doc3 = "Doctors suggest that driving may cause increased stress and blood pressure."
doc4 = "Sometimes I feel pressure to perform well at school, but my father never seems to drive my sister to do better."
doc5 = "Health experts say that Sugar is not good for your lifestyle."
doc_complete = [doc1, doc2, doc3, doc4, doc5]

import string

import nltk
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer

stop = set(stopwords.words('english'))
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()

def clean(doc):
    # lowercase, remove stopwords and punctuation, then lemmatize
    stop_free = " ".join(i for i in doc.lower().split() if i not in stop)
    punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
    normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
    return normalized

import gensim
from gensim import corpora

doc_clean = [clean(doc).split() for doc in doc_complete]
dictionary = corpora.Dictionary(doc_clean)                        # word <-> id mapping
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]  # bag-of-words corpus

Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word=dictionary, passes=50)
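To inspect what the gensim model learned, one can print the topics and the per-document topic distribution (a short follow-up sketch, not from the original post):

# each topic is shown as a weighted combination of its top words
for topic_id, topic in ldamodel.print_topics(num_topics=3, num_words=4):
    print(topic_id, topic)

# topic distribution of the first document: a list of (topic_id, probability)
print(ldamodel.get_document_topics(doc_term_matrix[0]))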

References:

towards LDA;
sklearn LDA;
LDA数学八卦;
the paper "Parameter Estimation for Text Analysis".
