Using a clustering algorithm to find all potentially similar documents in a list of documents


I am working with the Quora question pairs CSV file. I load it into a pandas DataFrame and isolate the qids and questions, so my questions look like this:

0        What is the step by step guide to invest in sh...
1        What is the step by step guide to invest in sh...
2        What is the story of Kohinoor (Koh-i-Noor) Dia...
3        What would happen if the Indian government sto...
.....
19408    What are the steps to solve this equation: [ma...
19409                           Is IMS noida good for BCA?
19410              How good is IMS Noida for studying BCA?

My actual dataset is much larger (500k questions), but I will use these questions to illustrate the problem.

I want to identify pairs of questions that are very likely to be asking the same thing. My first idea was a naive approach: convert each sentence into a vector with doc2vec, then for every sentence compute its cosine similarity against every other sentence, keep the highest-scoring match, and finally print all pairs whose cosine similarity is high enough. The problem is that this takes far too long to finish, so I need a different approach.
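
For illustration, here is a minimal sketch of what that naive all-pairs approach looks like (using TF-IDF in place of doc2vec for brevity). With n documents it builds an n x n similarity matrix, which is what makes it impractical at 500k questions:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = [
    "What is the step by step guide to invest in shares?",
    "Is IMS noida good for BCA?",
    "How good is IMS Noida for studying BCA?",
]

# n x n cosine-similarity matrix: quadratic in the number of documents
vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
similarities = cosine_similarity(vectors)
np.fill_diagonal(similarities, 0)  # ignore each question's similarity to itself

# for each question, keep its most similar other question
best_match = similarities.argmax(axis=1)
for i, j in enumerate(best_match):
    print(i, "->", j, "cosine:", round(similarities[i, j], 2))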

Then I found a suggestion in an answer to another question to use clustering for a similar problem. Below is the code I implemented based on that answer.

"Load and transform the dataframe to a new one with only question ids and questions"
train_df = pd.read_csv("test.csv", encoding='utf-8')

questions_df=pd.wide_to_long(train_df,['qid','question'],i=['id'],j='drop')
questions_df=questions_df.drop_duplicates(['qid','question'])[['qid','question']]
questions_df.sort_values("qid", inplace=True)
questions_df=questions_df.reset_index(drop=True)

print(questions_df['question'])

# vectorization of the texts
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(questions_df['question'].values.astype('U'))
# the vocabulary (the axes of our multi-dimensional space);
# on scikit-learn >= 1.0 use vectorizer.get_feature_names_out() instead
words = vectorizer.get_feature_names()
print("words", words)


n_clusters = 30
number_of_seeds_to_try = 10
max_iter = 300
number_of_process = 2  # the seeds are distributed across these processes
# note: n_jobs was removed from KMeans in scikit-learn 1.0; drop it on newer versions
model = KMeans(n_clusters=n_clusters, max_iter=max_iter, n_init=number_of_seeds_to_try, n_jobs=number_of_process).fit(X)

labels = model.labels_
# for each cluster, the vocabulary indices sorted by descending centroid weight
ordered_words = model.cluster_centers_.argsort()[:, ::-1]

print("centers:", model.cluster_centers_)
print("labels", labels)
print("inertia:", model.inertia_)

# count how many documents ended up in each cluster
texts_per_cluster = numpy.zeros(n_clusters)
for i_cluster in range(n_clusters):
    for label in labels:
        if label == i_cluster:
            texts_per_cluster[i_cluster] += 1

print("Top words per cluster:")
for i_cluster in range(n_clusters):
    print("Cluster:", i_cluster, "texts:", int(texts_per_cluster[i_cluster])),
    for term in ordered_words[i_cluster, :10]:
        print("\t"+words[term])

print("\n")
print("Prediction")

text_to_predict = "Why did Donald Trump win the elections?"
Y = vectorizer.transform([text_to_predict])
predicted_cluster = model.predict(Y)[0]
texts_per_cluster[predicted_cluster]+=1

print(text_to_predict)
print("Cluster:", predicted_cluster, "texts:", int(texts_per_cluster[predicted_cluster])),
for term in ordered_words[predicted_cluster, :10]:
    print("\t"+words[term])

I thought this would let me find, for each sentence, the cluster it most likely belongs to, and then compute cosine similarities only between the questions inside that cluster. That way I would not have to operate on the whole dataset, just on a handful of documents at a time. However, using "Why did Donald Trump win the elections?" as an example sentence, I get the following result.

Prediction
Why did Donald Trump win the elections?
Cluster: 25 texts: 244
    trump
    donald
    clinton
    hillary
    president
    vote
    win
    election
    did
    think

I know that my sentence belongs to cluster 25, and I can see the top words of that cluster. But how can I access the sentences that belong to this cluster? Is there any way to do that?

1 Answer


You can use predict to get the cluster labels, and then use numpy to pull all the documents of a specific cluster.

import numpy as np

clusters = model.fit_predict(X_train)

# row indices of all the documents assigned to cluster 0
indices = np.where(clusters == 0)[0]

# the documents themselves (the rows of X_train) in that cluster
cluster_docs = X_train[indices]

Now indices will contain the indices of all the documents from that cluster, and cluster_docs the corresponding rows of X_train.
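
Applied to the question's own variables, a minimal sketch (assuming X, Y, model, predicted_cluster and questions_df as defined in the question) that retrieves the cluster's sentences and ranks them by cosine similarity against the new text could look like this:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# indices of all training questions assigned to the predicted cluster
member_ids = np.where(model.labels_ == predicted_cluster)[0]

# the actual sentences belonging to that cluster
cluster_questions = questions_df['question'].iloc[member_ids]
print(cluster_questions.head())

# cosine similarity of the new text against the cluster members only
similarities = cosine_similarity(Y, X[member_ids]).ravel()
best = similarities.argmax()
print("Best match:", cluster_questions.iloc[best], "cosine:", round(similarities[best], 2))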


What is X_train in my example? - thelaw
questions_df['question'].values.astype('U') or X - rishi
@thelaw If this answered your question, please accept the answer. - rishi
