GMM: log-likelihood is not monotonically increasing

Yesterday I implemented a GMM (Gaussian Mixture Model) using the expectation-maximization algorithm.
As you recall, it models an unknown distribution as a mixture of Gaussians, and we need to learn the mean and variance of each Gaussian component, as well as its weight.
Here is the math behind the code (it's not complicated): http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/
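For reference, the density being fitted is the standard mixture form (this is the textbook definition; the weights \phi_k correspond to the priors in the code below):

p(x) = \sum_{k=1}^{K} \phi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad \sum_{k=1}^{K} \phi_k = 1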
Here is my code:
import numpy as np
from scipy.stats import multivariate_normal
import matplotlib.pyplot as plt

#reference for this code is http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/

def expectation(data, means, covs, priors): #E-step. returns the updated probabilities
    m = data.shape[0]                       #takes the data and the means, covariances, and priors of all clusters
    numOfClusters = priors.shape[0]

    probabilities = np.zeros((m, numOfClusters))
    for i in range(0, m):
        total = 0 #total density of example i under the current mixture (the normalizing constant)
        for l in range(0, numOfClusters):
            total += normalPDF(data[i, :], means[l], covs[l]) * priors[l, 0]
        for j in range(0, numOfClusters):
            probabilities[i, j] = normalPDF(data[i, :], means[j], covs[j]) * priors[j, 0] / total

    return probabilities

def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters
    m, n = data.shape
    numOfClusters = probabilities.shape[1]

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i]) / m #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.dot(vec, vec.T) #update covs

        means[i] /= np.sum(probabilities[:, i])
        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors]

def normalPDF(x, mean, covariance): #this is simply multivariate normal pdf
    n = len(x)

    mean = np.reshape(mean, (n, ))
    x = np.reshape(x, (n, ))

    var = multivariate_normal(mean=mean, cov=covariance)
    return var.pdf(x)


def initClusters(numOfClusters, data): #initialize all the gaussian clusters (means, covariances, priors)
    m, n = data.shape

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    initialCovariance = np.cov(data.T)

    for i in range(0, numOfClusters):
        means[i] = np.random.rand(n) #the initial mean for each gaussian is chosen randomly
        covs[i] = initialCovariance #the initial covariance of each cluster is the covariance of the data
        priors[i, 0] = 1.0 / numOfClusters #the initial priors are uniformly distributed.

    return [means, covs, priors]

def logLikelihood(data, probabilities): #data is our data. probabilities[i, j] is the probability that example i belongs to cluster j (between 0 and 1)
    m = data.shape[0] #num of examples

    examplesByCluster = np.zeros((m, 1))
    for i in range(0, m):
        examplesByCluster[i, 0] = np.argmax(probabilities[i, :])
    examplesByCluster = examplesByCluster.astype(int) #examplesByCluster[i] = j means that example i belongs in cluster j

    result = 0
    for i in range(0, m):
        result += np.log(probabilities[i, examplesByCluster[i, 0]]) #example i belongs in cluster examplesByCluster[i, 0]

    return result

m = 2000 #num of training examples
n = 8 #num of features for each example

data = np.random.rand(m, n)
numOfClusters = 2 #num of gaussians
numIter = 30 #num of iterations of EM
cost = np.zeros((numIter, 1))

[means, covs, priors] = initClusters(numOfClusters, data)

for i in range(0, numIter):
    probabilities = expectation(data, means, covs, priors)
    [means, covs, priors] = maximization(data, probabilities)

    cost[i, 0] = logLikelihood(data, probabilities)

plt.plot(cost)
plt.show()
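
Side note: the quantity that EM is actually guaranteed to increase monotonically is the observed-data log-likelihood, sum_i log sum_j priors[j] * N(data[i] | means[j], covs[j]), not the log of the winning cluster's posterior as in logLikelihood above. A minimal sketch of that quantity, reusing normalPDF from the code above:

def mixtureLogLikelihood(data, means, covs, priors): #textbook observed-data log-likelihood
    m = data.shape[0]
    numOfClusters = priors.shape[0]

    result = 0
    for i in range(0, m):
        density = 0 #density of example i under the full mixture
        for j in range(0, numOfClusters):
            density += priors[j, 0] * normalPDF(data[i, :], means[j], covs[j])
        result += np.log(density)

    return result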

The problem is that the log-likelihood behaves strangely. I expected it to be monotonically increasing, but it isn't.
For example, with 2000 examples with 8 features each and 3 Gaussian clusters, the log-likelihood looks like this (30 iterations):

[plot: log-likelihood over the 30 iterations, fluctuating instead of increasing]

So that's bad. But in other tests I ran, for example a test with 15 examples with 2 features and 2 clusters, the log-likelihood was:

[plot: log-likelihood for the smaller test]

Better, but still not perfect.

Why does this happen, and how can I fix it?


What data are you trying to model? From the code, it looks like you're modeling random points, i.e. there is no structure in the data at all. If that's the case, your GMM may simply be fluctuating at random. - etov
In this case it's random, but in the future it could be any kind of data, from temperatures to vehicle sensor readings, anything. I don't think it matters that the data is random. In theory, we are guaranteed monotone convergence, even on random data. - Oria Gruber
Have you tried comparing your results against those of a known-working implementation? One option is GaussianMixture from scikit-learn. - René
I have a scikit-learn function running on the same dataset, and there the log-likelihood is monotonically increasing. - Oria Gruber
1 Answer

The problem is in the maximization step.
The code uses means in order to compute covs. However, this is done in the same loop, before dividing means by the sum of the probabilities.
This causes the estimated covariances to explode.
Here is a suggested fix:
def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters
    m, n = data.shape
    numOfClusters = probabilities.shape[1]

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i]) / m   #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

        means[i] /= np.sum(probabilities[:, i])

    for i in range(0, numOfClusters):
        for j in range(0, m): #update covs
            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.multiply(vec, vec.T) #update covs

        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors]
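
For what it's worth, the inner loops of this M-step can also be written with matrix operations. A minimal vectorized sketch, assuming the same shape conventions as above (data is (m, n), probabilities is (m, numOfClusters)):

def maximizationVectorized(data, probabilities): #same M-step, expressed with matrix ops
    m, n = data.shape
    numOfClusters = probabilities.shape[1]

    clusterWeights = np.sum(probabilities, axis=0)                  #effective number of points per cluster
    priors = np.reshape(clusterWeights / m, (numOfClusters, 1))
    means = np.dot(probabilities.T, data) / clusterWeights[:, None]

    covs = np.zeros((numOfClusters, n, n))
    for i in range(0, numOfClusters):
        centered = data - means[i]                                  #(m, n), uses the already-normalized means
        covs[i] = np.dot((probabilities[:, i, None] * centered).T, centered) / clusterWeights[i]

    return [means, covs, priors]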

Here is the cost function for 200 data points with four features:

[plot: cost function over the EM iterations]

EDIT: I used to think this bug was the only problem with the code, but after running a few more examples I still sometimes see non-monotonic behavior (although less erratic than before). So this seems to be only part of the problem.

EDIT 2: There was another issue in the covariance calculation: the vector multiplication should be element-wise rather than a dot product (keeping in mind that the result should be a vector). The result now appears to be consistently monotonically increasing.
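
As a sanity check along the lines of the scikit-learn suggestion in the comments, a minimal sketch (assuming the same data, numOfClusters, and numIter variables as in the question) that fits sklearn's GaussianMixture on the same data so the log-likelihoods can be compared:

from sklearn.mixture import GaussianMixture

gm = GaussianMixture(n_components=numOfClusters, covariance_type='full', max_iter=numIter)
gm.fit(data)
print(gm.score(data) * m) #score returns the average log-likelihood per sample; multiply by m for the total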

