Scikit-learn: Comparing the K-Means and MiniBatchKMeans clustering algorithms


I am reading the scikit-learn user guide on Clustering. It has an example comparing K-Means and MiniBatchKMeans.

I am a bit confused by the following code in the example:

# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
k_means_cluster_centers = np.sort(k_means.cluster_centers_, axis=0)
mbk_means_cluster_centers = np.sort(mbk.cluster_centers_, axis=0)
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)
order = pairwise_distances_argmin(k_means_cluster_centers,
                                  mbk_means_cluster_centers)

Before and after sorting, the k-means cluster centers are:

k_means.cluster_centers_
array([[ 1.07705469, -1.06730994],
       [-1.07159013, -1.00648645],
       [ 0.96700708,  1.01837274]])

k_means_cluster_centers
array([[-1.07159013, -1.06730994],
       [ 0.96700708, -1.00648645],
       [ 1.07705469,  1.01837274]])

There are three centers, so I assume each row holds the x/y coordinates of one center. I am not sure why they use np.sort() before pairing each point with its closest center, since it distorts the x/y coordinates of the centers. Maybe they just meant to sort along the x or y axis?
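
To illustrate what I mean, here is a minimal numpy sketch (center values rounded, for illustration only) showing that np.sort(..., axis=0) sorts each column independently and therefore produces rows that are no longer actual centers:

import numpy as np

centers = np.array([[ 1.08, -1.07],
                    [-1.07, -1.01],
                    [ 0.97,  1.02]])   # rounded values of k_means.cluster_centers_

# axis=0 sorts the x column and the y column separately,
# so the rows no longer correspond to real cluster centers.
print(np.sort(centers, axis=0))
# [[-1.07 -1.07]
#  [ 0.97 -1.01]
#  [ 1.08  1.02]]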

I have created an issue on GitHub (https://github.com/scikit-learn/scikit-learn/issues/14504). Let's see what happens... - MB-F
It seems the file on GitHub has already been corrected, but the website still shows the faulty version containing np.sort. I stumbled upon this thread because I got confusing results when trying the kmeans approach outlined in the link above and was puzzled by np.sort. - Johann
2 Answers


I'm not sure why we use np.sort() here.

The answer is in the comment; however, the way it is implemented contains a bug, see below.

# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.

In the example code, the pairing is done in these lines:
k_means_cluster_centers = np.sort(k_means.cluster_centers_, axis=0)
mbk_means_cluster_centers = np.sort(mbk.cluster_centers_, axis=0)
(...)
order = pairwise_distances_argmin(k_means_cluster_centers,
                                  mbk_means_cluster_centers)

In the code, order is effectively used as a lookup table that maps each entry of k_means_cluster_centers to the corresponding cluster in mbk_means_cluster_centers.
my_members = mbk_means_labels == order[k]
cluster_center = mbk_means_cluster_centers[order[k]]
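
As a small, self-contained sketch (toy center values, not the ones from the example) of how that lookup table behaves:

import numpy as np
from sklearn.metrics.pairwise import pairwise_distances_argmin

# Toy centers, for illustration only.
k_centers = np.array([[ 1.0,  1.0], [-1.0, -1.0], [ 1.0, -1.0]])
mbk_centers = np.array([[-1.1, -0.9], [ 1.1, -1.0], [ 0.9,  1.1]])

# order[k] is the index of the MiniBatchKMeans center closest to
# KMeans center k, so both algorithms can share cluster colors.
order = pairwise_distances_argmin(k_centers, mbk_centers)
print(order)  # [2 0 1] -> k-means cluster 0 pairs with mbk cluster 2, etc.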

It distorts the coordinates of the computed cluster centers.
(Updated following the discussion in the comments)
Indeed, by using np.sort(..., axis=0) the center coordinates get mixed up. The proper way to sort would be np.lexsort, like so:
arr = k_means.cluster_centers_
k_means_cluster_centers = arr[np.lexsort((arr[:, 0], arr[:, 1]))]

arr = mbk.cluster_centers_
mbk_means_cluster_centers = arr[np.lexsort((arr[:, 0], arr[:, 1]))]
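
A quick check (a sketch using the center values printed in the question) that np.lexsort reorders whole rows and keeps each (x, y) pair intact:

import numpy as np

arr = np.array([[ 1.07705469, -1.06730994],
                [-1.07159013, -1.00648645],
                [ 0.96700708,  1.01837274]])

# lexsort treats the last key as the primary one (y here, x as tie-breaker)
# and returns row indices, so each row stays intact as an (x, y) pair.
print(arr[np.lexsort((arr[:, 0], arr[:, 1]))])
# [[ 1.07705469 -1.06730994]
#  [-1.07159013 -1.00648645]
#  [ 0.96700708  1.01837274]]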

This actually changes the result of the example:

Using np.sort(..., axis=0)

[figure: clustering comparison plot with np.sort(..., axis=0)]

Using np.lexsort

[figure: clustering comparison plot with np.lexsort]


For example, one center is [1.07705469, -1.06730994]. After np.sort() it becomes [1.07705469, 1.01837274]. - Tony416
The problem is that np.sort sorts the coordinates independently. That is, it does not preserve the information about which x and y coordinates belong to the same point. - MB-F
Indeed. I agree it does not make any sense and is a bug in the code. It looks like it was introduced in this commit: https://github.com/scikit-learn/scikit-learn/commit/ad758d20699df6baac6ec0a1e7e2b6c47f32a814 - miraculixx

I think you are correct. Sorting the way it is done in this example mixes up the x and y coordinates of the points. That it works in this example is more or less a coincidence.
We have the x coordinates [1, -1, 1] and the y coordinates [1, -1, -1]. After sorting they become [-1, 1, 1] and [-1, -1, 1], which make up the same three pairs we started with:
# original | sorted
# [ 1, -1] | [-1, -1]
# [-1, -1] | [ 1, -1]
# [ 1,  1] | [ 1,  1]

See below what happens when four clusters are used. In that case we have:
# original | sorted
# [-1, -1] | [-1, -1]
# [-1,  1] | [-1, -1]
# [ 1, -1] | [ 1,  1]
# [ 1,  1] | [ 1,  1]

These are different points, so the sorted rows no longer correspond to actual cluster centers.
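
Here is a minimal sketch (idealized ±1 coordinates instead of the fitted centers) reproducing both cases:

import numpy as np

three = np.array([[ 1, -1], [-1, -1], [ 1,  1]])
four  = np.array([[-1, -1], [-1,  1], [ 1, -1], [ 1,  1]])

# With three centers, column-wise sorting happens to yield the same pairs...
print(np.sort(three, axis=0))  # [[-1 -1] [ 1 -1] [ 1  1]]
# ...with four centers it produces points that were never cluster centers.
print(np.sort(four, axis=0))   # [[-1 -1] [-1 -1] [ 1  1] [ 1  1]]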

[figure: result of the modified four-cluster example]

The modified example code:

print(__doc__)

import time

import numpy as np
import matplotlib.pyplot as plt

from sklearn.cluster import MiniBatchKMeans, KMeans
from sklearn.metrics.pairwise import pairwise_distances_argmin
from sklearn.datasets import make_blobs

# #############################################################################
# Generate sample data
np.random.seed(0)

batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1], [-1, 1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)

# #############################################################################
# Compute clustering with Means

k_means = KMeans(init='k-means++', n_clusters=4, n_init=10)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0

# #############################################################################
# Compute clustering with MiniBatchKMeans

mbk = MiniBatchKMeans(init='k-means++', n_clusters=4, batch_size=batch_size,
                      n_init=10, max_no_improvement=10, verbose=0)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0

# #############################################################################
# Plot result

fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06', '#123456']

# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
k_means_cluster_centers = np.sort(k_means.cluster_centers_, axis=0)
mbk_means_cluster_centers = np.sort(mbk.cluster_centers_, axis=0)
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)
order = pairwise_distances_argmin(k_means_cluster_centers,
                                  mbk_means_cluster_centers)

# KMeans
ax = fig.add_subplot(1, 3, 1)
for k, col in zip(range(n_clusters), colors):
    my_members = k_means_labels == k
    cluster_center = k_means_cluster_centers[k]
    ax.plot(X[my_members, 0], X[my_members, 1], 'w',
            markerfacecolor=col, marker='.')
    ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
            markeredgecolor='k', markersize=6)
ax.set_title('KMeans')
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8,  'train time: %.2fs\ninertia: %f' % (
    t_batch, k_means.inertia_))

# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
    my_members = mbk_means_labels == order[k]
    cluster_center = mbk_means_cluster_centers[order[k]]
    ax.plot(X[my_members, 0], X[my_members, 1], 'w',
            markerfacecolor=col, marker='.')
    ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
            markeredgecolor='k', markersize=6)
ax.set_title('MiniBatchKMeans')
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, 'train time: %.2fs\ninertia: %f' %
         (t_mini_batch, mbk.inertia_))

# Initialise the different array to all False
different = (mbk_means_labels == 4)
ax = fig.add_subplot(1, 3, 3)

for k in range(n_clusters):
    different += ((k_means_labels == k) != (mbk_means_labels == order[k]))

identic = np.logical_not(different)
ax.plot(X[identic, 0], X[identic, 1], 'w',
        markerfacecolor='#bbbbbb', marker='.')
ax.plot(X[different, 0], X[different, 1], 'w',
        markerfacecolor='m', marker='.')
ax.set_title('Difference')
ax.set_xticks(())
ax.set_yticks(())

plt.show()

A more appropriate sort might look like this:

# order cluster centers by their x and y coordinates, weighted by 1 and 0.1 respectively
k_order = np.argsort(k_means.cluster_centers_[:, 0] + k_means.cluster_centers_[:, 1]*0.1)
mbk_order = np.argsort(mbk.cluster_centers_[:, 0] + mbk.cluster_centers_[:, 1]*0.1)
k_means_cluster_centers = k_means.cluster_centers_[k_order]
mbk_means_cluster_centers = mbk.cluster_centers_[mbk_order]

[figure: result using the coordinate-weighted argsort ordering]

However, the proper way is to align the cluster centers first and then enforce an (arbitrary) order. This should do the job:
mbk_order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_)
k_means_cluster_centers = k_means.cluster_centers_
mbk_means_cluster_centers = mbk.cluster_centers_[mbk_order]
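
With the centers aligned this way, equal indices already refer to matched clusters; a sketch (assuming the variable names from the full example above) of how the rest of the script would then look:

# Labels are computed against the aligned centers as before.
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)

# The lookup table reduces to the identity, so the plotting loops can
# stay unchanged (order[k] == k for every k).
order = np.arange(n_clusters)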
